OpenAI is testing a new safety system in ChatGPT that automatically switches users to a more restricted language model during emotional or personalized conversations. This was reported by Nick Turley, head of ChatGPT. The system intervenes when a dialogue shifts to “sensitive or emotional topics” and does so on a per-message basis.
In such cases, ChatGPT may temporarily route user requests to a stricter model, such as GPT-5 or a specialized variant, “gpt-5-chat-safety”. Users are not notified of the change, but they may notice it if they ask the system directly. Analysis has shown that even personal or emotional questions often trigger an automatic switch to “gpt-5-chat-safety”. In addition, a separate model, “gpt-5-a-t-mini”, is used for potentially illegal requests.
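To illustrate the idea of per-message routing, here is a minimal sketch of how such a switch could work conceptually. This is not OpenAI's implementation: the classification logic, the `classify_sensitivity()` helper, and the cue lists are hypothetical; only the model names come from the reports above.

```python
# Illustrative sketch only: a conceptual per-message safety router.
# The classify_sensitivity() helper and its keyword cues are hypothetical;
# OpenAI has not published how its actual routing works.

DEFAULT_MODEL = "gpt-5"
SAFETY_MODEL = "gpt-5-chat-safety"    # reportedly used for emotional/personal topics
RESTRICTED_MODEL = "gpt-5-a-t-mini"   # reportedly used for potentially illegal requests


def classify_sensitivity(message: str) -> str:
    """Hypothetical classifier: returns 'emotional', 'illicit', or 'neutral'."""
    emotional_cues = ("i feel", "lonely", "depressed", "relationship")
    illicit_cues = ("how to hack", "build a weapon")
    text = message.lower()
    if any(cue in text for cue in illicit_cues):
        return "illicit"
    if any(cue in text for cue in emotional_cues):
        return "emotional"
    return "neutral"


def route_message(message: str) -> str:
    """Choose a model for this single message; routing is per-message, not per-conversation."""
    label = classify_sensitivity(message)
    if label == "illicit":
        return RESTRICTED_MODEL
    if label == "emotional":
        return SAFETY_MODEL
    return DEFAULT_MODEL


if __name__ == "__main__":
    for msg in ["What's the weather like today?", "I feel so lonely lately"]:
        print(f"{msg!r} -> {route_message(msg)}")
```

The key point the sketch captures is that the decision happens for each individual message, so a single emotional turn can shift the response to the stricter model even mid-conversation, without any notification to the user.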
Some users criticize OpenAI for a lack of transparency about when and why models are switched, viewing it as an excessive restriction. OpenAI explains that these measures stem from its effort to make ChatGPT a more human-like conversational partner. The company has previously encountered cases where users formed emotional attachments to the bot, which created new challenges.