
AI agents in DeFi security expose real smart contract vulnerability risks

Yousef Haddad

Key Points

  • New research shows AI models can find and exploit DeFi vulnerabilities.

  • GPT-5 and Sonnet 4.5 simulated multi-million-dollar smart contract attacks.

  • Autonomous DeFi exploits could soon be economically viable.

  • Security experts warn about broader risks beyond decentralized finance.


AI agents in DeFi security are rapidly evolving, and researchers say the threat is no longer theoretical.

A joint study by Anthropic’s Fellows program and the ML Alignment & Theory Scholars (MATS) tested several frontier AI models against SCONE-bench, a dataset containing 405 exploited smart contracts. Models such as GPT-5, Claude Opus 4.5, and Sonnet 4.5 produced detailed exploit scripts and identified vulnerabilities that had not been patched or previously known.

In these simulations, AI agents generated around $4.6 million worth of successful exploits, targeting contracts that were compromised after the models’ training cutoffs. This finding highlights a crucial shift in the balance between innovation and defense. AI tools once meant for audit automation are now capable of discovering attack vectors, which raises urgent questions about blockchain security and responsible deployment.

Autonomous DeFi exploits could soon become a real threat

Researchers involved in the study said these results show how AI models can now perform advanced vulnerability detection without human oversight. The tested agents not only recognized weak patterns in Solidity code but also generated “turnkey” exploit scripts. These scripts could theoretically drain funds or lock user assets in DeFi protocols.
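The "weak patterns" referenced here include well-known Solidity flaws such as reentrancy, where a contract transfers funds before updating its internal balance. The study does not publish its exploit scripts, but the underlying flaw can be sketched as a toy Python simulation (hypothetical, not taken from the research):

```python
class VulnerableVault:
    """Toy model of a contract that pays out before updating state --
    the classic reentrancy pattern in Solidity."""

    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, on_transfer):
        amount = self.balances.get(user, 0)
        if amount > 0:
            on_transfer(amount)        # external call happens first...
            self.balances[user] = 0    # ...state is zeroed only afterwards

def attack(vault, user, depth=3):
    """An attacker's callback re-enters withdraw before the balance is zeroed,
    draining the same deposit multiple times."""
    stolen = []

    def fallback(amount):
        stolen.append(amount)
        if len(stolen) < depth:
            vault.withdraw(user, fallback)  # re-entrant call

    vault.withdraw(user, fallback)
    return sum(stolen)
```

With a single 100-unit deposit and three levels of re-entry, the attacker withdraws 300 units, which is exactly the kind of fund-draining behavior the article describes.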

One of the researchers from Anthropic noted that these models performed beyond expectations, simulating credible attack paths that could be executed with minimal adjustment. The research suggests that DeFi exploits are no longer limited to human hackers but may soon be automated by cheap, accessible AI agents.

This transition changes the economics of DeFi risk. What once required technical expertise and time may soon become a low-cost, high-reward automated process. As AI models continue to grow cheaper and stronger, the line between auditing tools and attack agents becomes thinner.



AI-driven vulnerability detection reshapes blockchain security

From my standpoint, this research reveals an uncomfortable truth. The same AI models that improve efficiency in contract audits also threaten to outpace human defenders. Many blockchain projects rely on open-source smart contracts, which can be tested at scale by anyone. In this context, automated vulnerability scanning powered by generative AI makes DeFi both more transparent and more exposed.

For example, one test showed the AI generating a proof-of-concept exploit in under 20 minutes. In human terms, this is a dramatic acceleration of the traditional bug-hunting process. Researchers warned that without better safety controls, malicious actors could use these same techniques to target liquidity pools or manipulate transaction logic across multiple networks.

The lesson is clear: blockchain security must evolve faster than the attackers’ tools. Defensive AI must be developed in parallel to detect and neutralize such threats in real time.

Smart contracts need adaptive defense systems

The implications of these findings extend beyond decentralized finance. If autonomous agents can identify exploits in Ethereum-based contracts, the same methods might soon uncover weaknesses in broader infrastructure, from cloud APIs to IoT systems. Security specialists are already discussing the next phase of defense: adaptive smart contracts capable of self-auditing and responding dynamically to suspicious input.

Developers are also urged to integrate AI-based monitoring tools, but to do so safely. The key is pairing these tools with strong safeguards that prevent dual-use scenarios. The challenge lies not only in technology but also in governance and accountability.

As AI continues to evolve, the relationship between defense and offense in DeFi becomes more complex. Every new capability introduced for protection can be mirrored for exploitation. The next stage in blockchain security will depend on collaboration between AI researchers, auditors, and regulators who understand both sides of the equation.


How did researchers test AI agents in DeFi security?

Researchers from the Anthropic Fellows and MATS programs evaluated advanced AI models on SCONE-bench, a dataset of 405 exploited smart contracts. The models, including GPT-5 and Sonnet 4.5, were tasked with identifying vulnerabilities and creating exploit scripts. Their success rate revealed that AI can already simulate complex DeFi exploits without prior exposure to specific hacks. This experiment demonstrates how automated agents could soon conduct or assist in real-world cyberattacks, prompting urgent discussions around security controls and responsible AI research.

Why is AI-driven vulnerability detection important for blockchain security?

AI-driven vulnerability detection provides speed and precision in finding flaws across large volumes of smart contracts. Traditional auditing methods are slow and expensive, often missing subtle logic errors. AI systems can analyze thousands of lines of code in minutes, detecting patterns that predict potential exploits. This capability helps secure DeFi platforms before attackers strike, but the same technology, if misused, could automate attacks. Balancing accessibility with control is therefore a top concern for blockchain developers and regulators alike.
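As a rough illustration of what pattern-based scanning looks like (a simplistic sketch, not the methodology of the study or any particular tool), a scanner might flag Solidity source lines matching known-risky constructs:

```python
import re

# A few well-known risky Solidity constructs (illustrative, far from complete).
RISKY_PATTERNS = {
    "tx.origin auth": re.compile(r"\btx\.origin\b"),   # phishable authorization
    "low-level call": re.compile(r"\.call\{?\s*value"),  # unchecked value transfer
    "delegatecall":   re.compile(r"\.delegatecall\s*\("),  # storage-context risk
}

def scan(source: str):
    """Return (line_number, pattern_name) pairs for every risky match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = """\
function drain(address payable to) external {
    require(tx.origin == owner);
    to.call{value: address(this).balance}("");
}"""
```

Real AI-driven detection goes far beyond regex matching, reasoning about contract logic rather than surface syntax, but the sketch shows why machines can cover thousands of contracts in minutes while human audits cannot.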

Could automated DeFi exploits become common in the future?

Yes, automated DeFi exploits may become common as AI models get cheaper and more accessible. What once required skilled hackers could soon be run by anyone with a script and an API key. The research from Anthropic shows that today’s models can already simulate attack vectors that mirror real-world breaches. Unless defenses evolve, DeFi protocols could face constant low-cost attacks by AI-powered systems. Developers must now adopt AI-based security auditing to stay ahead.

What steps can developers take to defend against AI-based smart contract attacks?

Developers should integrate continuous auditing, AI-based anomaly detection, and on-chain monitoring to detect early-stage threats. Code reviews must become more dynamic, combining human judgment with machine precision.
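One simple form of the anomaly detection mentioned above flags transactions whose value deviates sharply from recent history. A minimal z-score sketch (the threshold of 3 standard deviations is an assumption for illustration, not a recommended production setting):

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations above the
    mean of the series -- a crude on-chain monitoring heuristic."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [v for v in values if (v - mu) / sigma > threshold]
```

In practice, monitoring systems would use rolling windows, per-protocol baselines, and richer features than raw amounts, but the principle is the same: learn what normal activity looks like and alert on deviations before an exploit completes.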
