
Hello LISA

Written by Andy

Last updated 2025-06-24 10:18 UTC


Software is changing. Again.

— Andrej Karpathy, YC AI Startup School, 2025

We never stop being amazed by the pace at which the software industry has moved over the last two years. As LLMs become more capable, we are officially living in an era where we write less code and more instructions. "Your programs are now programs that program neural networks".

LLMs have become remarkably good at understanding our intentions and producing reasonably good code. With the autonomous agentic approach, more and more code is being written in a style that has now earned its infamous name: "vibe coding". On one hand, this greatly democratizes software development: anyone with an idea can jump in and build something that would otherwise require years of practice and discipline to produce. On the other hand, it also makes the software development process more opaque, harder to understand, and prone to hidden security issues. As the AI ecosystem evolves, we are going to witness (or perhaps are already witnessing) a burst of software production, whether by professionals with AI as a co-pilot or by your neighbor John who just spent an hour or two playing with tools like Lovable.

Vibe coding is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.

Source: Wikipedia

The scale and speed of software production in the AI era will pose a major challenge to maintaining software quality and security, and we are better off adapting traditional DevSecOps practices to this new reality. On the bright side, large language models also offer us a new way to improve how security analysis is done: LLMs naturally understand code written in various programming languages and can spot issues that traditional security tools find difficult, or even impossible, to detect. This is where LISA comes in.

Meet LISA

LISA

LISA is our attempt to bring LLM-powered security analysis to reality. We are currently focusing on blockchain software development, since the nature of the crypto ecosystem and the complexity of smart contracts make them prone to various vulnerabilities, which have led to substantial financial losses over time. Several of our research efforts 1, 2, 3 exhibit the potential of using LLMs to analyze smart contracts. LISA takes great inspiration from this research; it strikes a balance between speed, accuracy, cost, and usability, and aims to become a powerful tool for security professionals and enthusiasts alike.

Our initial testing has shown that LISA can expose logic vulnerabilities of medium to high severity in smart contracts that traditional tools like Slither cannot detect. Just as importantly, LISA avoids reporting irrelevant or low-severity findings and keeps false positives to a minimum, which greatly reduces the noise and the time human auditors or security experts spend analyzing and confirming findings.

For example, in the recent attack against Bankroll Network, which resulted in a $65K loss, the flaw lies in the distribute function: profit calculations were not capped to the available dividendBalance_, leading to inflated rewards and reserve drainage (e.g., elephantReserve_). LISA detects this issue and provides a detailed analysis and a recommendation for fixing it, showing its potential to prevent a great amount of financial loss.
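To make the flaw concrete, here is a simplified, hypothetical sketch in Python of an uncapped payout versus the capped fix. The names dividendBalance_ and elephantReserve_ come from the description above; the DividendPool class and its methods are illustrative assumptions, not the actual Bankroll Network contract code.

```python
class DividendPool:
    """Toy model of a dividend pool backed by a separate reserve."""

    def __init__(self, dividend_balance: int, elephant_reserve: int):
        self.dividend_balance = dividend_balance  # funds earmarked for payouts
        self.elephant_reserve = elephant_reserve  # reserve that should stay intact

    def distribute_uncapped(self, computed_profit: int) -> int:
        # BUG (as in the flaw described above): the payout is whatever the
        # profit formula returns, even when it exceeds dividend_balance;
        # the shortfall is silently pulled from the reserve, draining it.
        payout = computed_profit
        from_dividends = min(payout, self.dividend_balance)
        self.dividend_balance -= from_dividends
        self.elephant_reserve -= payout - from_dividends
        return payout

    def distribute_capped(self, computed_profit: int) -> int:
        # FIX: cap the payout to the funds actually available for dividends.
        payout = min(computed_profit, self.dividend_balance)
        self.dividend_balance -= payout
        return payout


pool = DividendPool(dividend_balance=100, elephant_reserve=1_000)
pool.distribute_uncapped(400)  # reserve covers the 300 shortfall
print(pool.elephant_reserve)   # 700: the reserve has been drained

safe = DividendPool(dividend_balance=100, elephant_reserve=1_000)
safe.distribute_capped(400)    # payout capped at the 100 available
print(safe.elephant_reserve)   # 1000: the reserve is untouched
```

An attacker who can inflate computed_profit in the uncapped variant can repeatedly siphon the reserve; bounding the payout by the available balance removes that path.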

v1.0 and beyond

We are releasing the very first version of LISA today, as we are keen to get feedback and suggestions from the community and from security experts. Though still at an early stage, the first version already provides quite a few features that we believe can greatly improve the security analysis process. Whether you are a security expert, an auditor, or a blockchain developer, we recommend giving LISA a try (with our free starter credits) on your next smart contract audit or development project. We would be more than happy to see what issues you can detect, and to hear your feedback on the product. For a detailed feature overview, please refer to the v1.0 Release Notes or our documentation.

We are not stopping here: there are many features in the pipeline, or already in the works, that we cannot wait to share with you in the coming months. So give our Twitter/X a follow and stay tuned!

❇︎❇︎❇︎

In the era of Software 3.0, we will face challenges in maintaining the quality and security of AI-produced (or AI-assisted) software, as well as the security of the AI itself. We need to rethink how security practices work in many aspects of this new era, and to explore the great potential that LLMs can offer the security industry. We hope that LISA is a small step in this direction, and we look forward to building a capable yet easy-to-use system that can be seamlessly integrated into the security workflow of the future.

Tags: LISA, announcement, AI, security, blockchain, smart-contracts, Software 3.0, LLM