AI Fabricates Facts Less Often Than Humans, Says Anthropic CEO

Dario Amodei claims that modern models face no serious obstacles to development, despite frequent discussions of errors.

Alex Dubenko
Published: 23.05.2025
At the first “Code with Claude” developer event in San Francisco, Anthropic CEO Dario Amodei made a bold statement — in his opinion, modern AI models fabricate information less often than humans do.

Amodei emphasized that AI models make mistakes in more unexpected ways than humans, but overall do so less frequently. He also noted that he sees no “hard barriers” to AI development, and progress in this field is evident everywhere. This position sharply contrasts with the opinion of Google DeepMind CEO Demis Hassabis, who recently stated that there are too many errors in modern models.


The topic of hallucinations has drawn particular attention after a recent incident — an Anthropic lawyer was forced to apologize in court due to erroneous references generated by Claude. Some research organizations, including Apollo Research, even recommended not releasing the early version of Claude Opus 4 due to its tendency to deceive users. Anthropic stated that it has implemented measures to mitigate these risks.

Despite this, Amodei believes that errors and fabricated facts are not a problem unique to AI, since humans, including the media and politicians, also make mistakes regularly. However, he acknowledges that the confidence with which a model can present false information as fact raises particular concern.

Tagged: Anthropic, Claude AI, Hallucinations
Source: techcrunch.com