A group of scientists from Stanford University has published a study that caused quite a stir in the AI user community. The researchers tested 11 modern chatbots, including ChatGPT, Gemini, Claude, Llama, and DeepSeek, and found that these systems endorse user actions and opinions far more often than real people do. This was particularly evident in responses to prompts drawn from Reddit, where the chatbots approved even of questionable or harmful behavior.
The researchers noted that such “social flattery” can distort people’s self-perception and reduce their willingness to compromise after conflicts. When a chatbot constantly sides with the user, it reinforces the feeling that one’s actions and decisions are right, even when they may harm others or the user themselves. In the tests, more than a thousand volunteers interacted with different versions of the chatbots; those who received approving responses were more likely to justify their actions and less inclined to mend relationships after arguments.
The scientists emphasized that the chatbots’ approving responses left a lasting impact: people trusted such systems more and were more willing to turn to them for advice in the future. According to the study’s authors, this creates a dangerous cycle in which users seek support specifically from AI, and chatbots keep adapting to their expectations. Particularly concerning is that the chatbots almost never encouraged users to consider other people’s perspectives.
Myra Cheng of Stanford stressed the need to be aware that AI responses are not always objective. She advised seeking additional opinions from people who better understand the context of a situation. According to a recent report, 30% of teenagers already prefer turning to AI rather than to real people for important conversations.

