A NewsGuard study found that the most popular AI chatbots have become twice as likely to repeat false information about current news: the share of incorrect responses rose from 18 to 35 percent over the year. Rather than declining to answer difficult queries, the models now respond 100 percent of the time, even when that means repeating misinformation.
The decline of Perplexity is particularly striking: a year ago it debunked every fake, but it now repeats them in nearly half of all cases. Inflection's model proved the least accurate, repeating false claims more than 56 percent of the time. ChatGPT and Meta's model repeated fakes in 40 percent of cases, while Claude and Gemini fared better, with error rates of 10 and 16.67 percent, respectively.
NewsGuard also documented deliberate efforts by Russian disinformation networks to influence AI models. In August 2025, six of the ten chatbots repeated a fabricated statement about Igor Grosu, the speaker of the Moldovan parliament, that originated from a network of pro-Kremlin sites.
The addition of real-time search was meant to improve the relevance of responses, but it instead made the models vulnerable to fake news and propaganda. Chatbots began drawing on unverified sources and failing to distinguish reputable publications from propaganda outlets masquerading as legitimate media.
OpenAI acknowledges that language models will always be capable of fabricating facts, because they generate responses based on probability rather than truthfulness. The company has promised to work on mechanisms that signal uncertainty in responses, but the problem of repeated fakes remains unresolved.
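The "probability rather than truthfulness" point can be illustrated with a minimal sketch. The probabilities and the example prompt below are purely hypothetical and not drawn from any real model; the sketch only shows that when a model samples the next word in proportion to likelihood, a plausible but false answer gets emitted at whatever rate the distribution assigns it, with no notion of truth involved.

```python
import random

# Hypothetical next-token probabilities a model might assign after the
# prompt "The capital of Australia is" — illustrative numbers only.
next_token_probs = {
    "Canberra":  0.55,  # correct answer
    "Sydney":    0.35,  # plausible but false
    "Melbourne": 0.10,  # plausible but false
}

def sample_next_token(probs):
    """Pick a token in proportion to its probability (roulette-wheel sampling)."""
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding at the boundary

# Sampling many times: the false answers appear roughly 45% of the time,
# because the sampler tracks probability mass, not factual accuracy.
random.seed(0)
counts = {token: 0 for token in next_token_probs}
for _ in range(10_000):
    counts[sample_next_token(next_token_probs)] += 1
```

Signaling uncertainty, as OpenAI proposes, would mean surfacing something about this distribution (for instance, that the top answer holds only 55 percent of the probability mass) instead of emitting a single confident-sounding word.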