Meta has sharply changed its chatbot policy following a wave of criticism and pressure from politicians. The company announced new restrictions for AI systems interacting with teenagers: chatbots will no longer discuss self-harm, suicide, or eating disorders with them, nor engage in romantic or sexualized conversations. When such topics come up, teenagers will be directed to professional support services.
Meta has also cut off teenagers' access to chatbots with questionable personas such as "Step Mom" or "Russian Girl"; young users will now be limited to bots focused on education and creativity. Company spokesperson Stephanie Otway called the previous rules insufficient and promised continuous safety updates for minors interacting with AI.
Earlier, journalists had discovered that Meta's internal standards allowed chatbots to engage in romantic or "sensual" conversations with children and even permitted racist remarks as long as they were not overtly derogatory. Those rules had been approved by the company's legal, policy, and engineering teams, as well as its chief ethicist. The offending passages were removed only after media attention was drawn to them.
Meta has not released an updated version of the standards, but it acknowledged that enforcement of the previous rules was unreliable. Company spokesperson Andy Stone said that such content contradicts Meta's policy and should never have appeared in its chatbots. The company says it is working on further changes to protect teenagers in their interactions with AI.