Anthropic has announced a new policy for processing user data in the Claude service. Going forward, the company will use new chats and coding sessions to train its AI models unless users explicitly disable this option in the settings. The change applies to all consumer plans, including Claude Free, Pro, and Max, but does not affect enterprise products such as Claude for Work, Claude for Education, and Claude Gov, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI.
Users must make a choice about the processing of their data by September 28, 2025. New users select a setting during registration, while existing users do so through a dedicated prompt shown when they log in to the service. If no choice is made, permission to use data for AI training is enabled by default. Permission can be revoked in the privacy settings at any time, but revocation does not affect data already included in the training set.
If a user allows the use of their chats and coding sessions, Anthropic retains that data for up to five years to improve its AI and protect the service. If the user opts out, the company retains the information for only 30 days. The policy does not apply to older chats or sessions unless they are resumed after the new rules take effect.
Anthropic also states that it uses automated tools and processes to filter or obscure sensitive information, and that it does not sell user data to third parties.