OpenAI has removed the feature that allowed ChatGPT users to make their conversations accessible to search engines like Google. The decision came just hours after criticism of the privacy risks surged on social media. The feature was experimental and strictly opt-in: users had to share a chat themselves and then check a box to make the conversation searchable.
Users noticed that the search query site:chatgpt.com/share surfaced thousands of published conversations, ranging from everyday questions to personally and professionally sensitive topics. Many conversations included names, places of residence, and other personal data. OpenAI's security team acknowledged that the existing safeguards did not adequately protect against the accidental exposure of private information.
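For illustration, the `site:` operator simply restricts a search engine's results to a given domain path, so anyone could reproduce the discovery with an ordinary query URL. A minimal sketch (the search URL shown is Google's standard query format; the code only builds the URL and does not fetch anything):

```python
from urllib.parse import urlencode

# The `site:` operator limits results to pages under chatgpt.com/share,
# i.e. the publicly indexed shared conversations.
query = "site:chatgpt.com/share"
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)
```

Because the shared pages were plain, crawlable URLs, no special tooling was needed; once indexed, they appeared in results like any other web page.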
The developers noted that the feature was built to let people share useful conversations, but it quickly became clear that users could accidentally disclose things they never meant to make public. Even the extra activation steps did not prevent the unwanted spread of personal data. Other companies have faced similar incidents, with private conversations from Google's Bard and Meta AI turning up in public search results.
After the incident, OpenAI promptly disabled the feature and promised to rethink its approach to protecting user privacy. For business users, it was another reminder to pay close attention to settings and data-protection policies in AI services. The company emphasized that corporate and team accounts have separate protection mechanisms, but the incident eroded trust in such tools among regular users.