Craftium.AI
Claude Opus 4 to Receive Feature for Ending Harmful Conversations

The feature activates only in cases of extremely offensive requests and does not trigger when there is a risk of self-harm.

Eleni Karasidi
Published: 17.08.2025
News
Illustrative image from anthropic.com.

Anthropic has introduced a feature that allows its latest and largest AI models to end conversations in rare, extreme cases of persistently harmful or abusive interactions. The company emphasizes that the feature is intended to protect not the user but the AI model itself. It applies to the Claude Opus 4 and 4.1 models and activates only when users send requests involving sexual content with minors, or attempt to obtain information that would enable large-scale violence or terrorist acts.

Anthropic notes that during testing, Claude Opus 4 was reluctant to respond to such requests and showed clear signs of unwillingness to continue the conversation. The dialogue-ending feature triggers only after several failed attempts to redirect the conversation, when no productive interaction remains possible, or when the user explicitly asks to end the chat.


The company reports that Claude will not use this feature when there is a risk that the user may harm themselves or others. After a conversation is ended, users can start a new dialogue from the same account, or branch off the ended conversation by editing their earlier messages.

Anthropic considers this feature an experiment and plans to further refine the approach. The company is also exploring the issue of “model well-being” and testing various ways to reduce potential risks to its AI models in the future.

Tagged: AI assistant, Anthropic, Claude AI, Security