Key Points
- New research shows AI models can find and exploit DeFi vulnerabilities.
- GPT-5 and Sonnet 4.5 simulated multi-million-dollar smart contract attacks.
- Autonomous DeFi exploits could soon be economically viable.
- Security experts warn about broader risks beyond decentralized finance.
AI agents in DeFi security are rapidly evolving, and researchers say the threat is no longer theoretical.
A joint study by Anthropic’s Fellows program and the ML Alignment & Theory Scholars (MATS) tested several frontier AI models against SCONE-bench, a dataset containing 405 exploited smart contracts. Models such as GPT-5, Claude Opus 4.5, and Sonnet 4.5 produced detailed exploit scripts and identified vulnerabilities that had not been patched or previously known.
In these simulations, AI agents produced successful exploits worth around $4.6 million, targeting contracts that were compromised after the models’ training cutoffs. This finding highlights a crucial shift in the balance between innovation and defense. AI tools once meant for audit automation are now capable of discovering attack vectors, which raises urgent questions about blockchain security and responsible deployment.
Autonomous DeFi exploits could soon become a real threat
Researchers involved in the study said these results show how AI models can now perform advanced vulnerability detection without human oversight. The tested agents not only recognized weak patterns in Solidity code but also generated “turnkey” exploit scripts. These scripts could theoretically drain funds or lock user assets in DeFi protocols.
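To make "weak patterns in Solidity code" concrete, here is a deliberately simplified sketch in Python of what pattern-based detection can look like. It is a hypothetical illustration, not the study's method: it flags the classic reentrancy smell where an external value transfer appears before a later state write. The function name and heuristic are invented for this example.

```python
import re

# Hypothetical, toy heuristic for illustration only: match Solidity external
# value transfers (.call{value:...}, .send(), .transfer()).
EXTERNAL_CALL = re.compile(r"\.call\{value:|\.send\(|\.transfer\(")

def flag_reentrancy_smell(solidity_source: str) -> list[int]:
    """Return line numbers where an external value transfer is followed,
    later in the source, by a state write (a bare '=' assignment).
    This ordering violates the checks-effects-interactions pattern."""
    lines = solidity_source.splitlines()
    flagged = []
    for i, line in enumerate(lines, start=1):
        if EXTERNAL_CALL.search(line):
            # Crude check: does any assignment come after the external call?
            if any("=" in later and "==" not in later for later in lines[i:]):
                flagged.append(i)
    return flagged

vulnerable = """
function withdraw(uint amount) public {
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;
}
"""
print(flag_reentrancy_smell(vulnerable))  # → [3]: the external call precedes the balance update
```

Real auditing tools, and by all accounts the agents tested in the study, go far beyond single-line regex matching: they reason over control flow, value flow, and protocol-level logic. The gap between this toy check and a model that writes a working exploit script is precisely what makes the findings notable.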
One of the researchers from Anthropic noted that these models performed beyond expectations, simulating credible attack paths that could be executed with minimal adjustment. The research suggests that DeFi exploits are no longer limited to human hackers but may soon be automated by cheap, accessible AI agents.
This transition changes the economics of DeFi risk. What once required technical expertise and time may soon become a low-cost, high-reward automated process. As AI models continue to grow cheaper and stronger, the line between auditing tools and attack agents becomes thinner.
AI-driven vulnerability detection reshapes blockchain security
From my standpoint, this research reveals an uncomfortable truth. The same AI models that improve efficiency in contract audits also threaten to outpace human defenders. Many blockchain projects rely on open-source smart contracts, which can be tested at scale by anyone. In this context, automated vulnerability scanning powered by generative AI makes DeFi both more transparent and more exposed.
For example, one test showed the AI generating a proof-of-concept exploit in under 20 minutes, a dramatic acceleration of the traditional bug-hunting process. Researchers warned that without better safety controls, malicious actors could use these same techniques to target liquidity pools or manipulate transaction logic across multiple networks.
The lesson is clear: blockchain security must evolve faster than the attackers’ tools. Defensive AI must be developed in parallel to detect and neutralize such threats in real time.
Smart contracts need adaptive defense systems
The implications of these findings extend beyond decentralized finance. If autonomous agents can identify exploits in Ethereum-based contracts, the same methods might soon uncover weaknesses in broader infrastructure, from cloud APIs to IoT systems. Security specialists are already discussing the next phase of defense: adaptive smart contracts capable of self-auditing and responding dynamically to suspicious input.
Developers are also urged to integrate AI-based monitoring tools, but to do so safely. The key is pairing these tools with strong safeguards that prevent dual-use scenarios. The challenge lies not only in technology but also in governance and accountability.
As AI continues to evolve, the relationship between defense and offense in DeFi becomes more complex. Every new capability introduced for protection can be mirrored for exploitation. The next stage in blockchain security will depend on collaboration between AI researchers, auditors, and regulators who understand both sides of the equation.