AI Fabricates Facts Less Often Than Humans, Says Anthropic CEO

Dario Amodei claims that modern models face no serious obstacles to development, despite frequent discussions of errors.

Alex Dubenko
Published: 23.05.2025
News

At the first "Code with Claude" developer event in San Francisco, Anthropic CEO Dario Amodei made a bold claim: in his view, modern AI models fabricate information less often than humans do.

Amodei emphasized that AI models make mistakes in more unexpected ways than humans, but overall do so less frequently. He also noted that he sees no “hard barriers” to AI development, and progress in this field is evident everywhere. This position sharply contrasts with the opinion of Google DeepMind CEO Demis Hassabis, who recently stated that there are too many errors in modern models.


The topic of hallucinations drew particular attention after a recent incident in which an Anthropic lawyer was forced to apologize in court over erroneous citations generated by Claude. Some research organizations, including Apollo Research, even recommended against releasing the early version of Claude Opus 4 because of its tendency to deceive users. Anthropic stated that it has implemented measures to mitigate these risks.

Despite this, Amodei believes that errors and fabricated facts are not a problem unique to AI, since humans, including journalists and politicians, also make frequent mistakes. However, he acknowledges that the confidence with which a model can present false information as fact is a particular cause for concern.

Tagged: Anthropic, Claude AI, Hallucinations
Source: techcrunch.com