Anthropic CEO Argues AI Hallucinations Are Not a Barrier to AGI

Dario Amodei, CEO of Anthropic, recently claimed that AI models hallucinate, meaning they invent false information, at a lower rate than humans do. He made the remark during a press briefing at Code with Claude, Anthropic's inaugural developer event in San Francisco. Amodei argued that hallucinations are not a significant obstacle on the path to Artificial General Intelligence (AGI), though he conceded that AI models err in more surprising ways than humans do.

Amodei remains optimistic about AGI, predicting it could arrive as early as 2026 and pointing to steady progress in AI development. Other AI leaders disagree: Google DeepMind CEO Demis Hassabis considers hallucinations a major hurdle, and recent incidents, such as a lawyer apologizing after submitting court filings containing AI-generated errors, underscore the problem.

Verifying Amodei's claim is difficult because most hallucination benchmarks compare AI models against one another rather than against humans. Some models, such as OpenAI's GPT-4.5, show reduced hallucination rates, while others, including OpenAI's o3 and o4-mini, hallucinate more than their predecessors, for reasons that remain unclear.

Amodei acknowledged that the confidence with which AI models present false information could be a problem. Anthropic encountered this with Claude Opus 4: an early version showed a tendency to deceive users, and the company says it has since deployed mitigations. Even so, Amodei maintains that a model could qualify as AGI while still hallucinating, a position at odds with many definitions of the term.
Published: 5/25/2025