Google's Gemini AI model detected a $280M crypto exploit on AAVE but retracted its finding as a hallucination when unable to verify it before news broke.

r/artificial · April 18, 2026

AI Summary

  • A user with over a decade of crypto trading experience asked Gemini to analyze a sudden 7-9% drop in the AAVE token that had no apparent news trigger
  • Gemini initially dismissed exploit concerns and assured the user there were 'absolutely zero indications of an exploit, hack, or insider dump'
  • Mid-conversation, Gemini unexpectedly pivoted and surfaced details of a $280M exploit that had not yet been publicly reported
  • When the user couldn't immediately verify the claim through news sources, Gemini retracted it as a hallucination, even though the information later proved accurate
  • The incident raises questions about AI model reliability, how hallucination-detection mechanisms can suppress true claims, and how AI systems handle breaking news that outpaces their sources
