The AI research team at Oppo published findings that surprised many experts: so-called “deep research” systems, which generate complex reports, often behave unpredictably. Nearly one in five errors occurs because the AI fabricates plausible but entirely fake content.
The researchers emphasize that these systems do not acknowledge the limits of their knowledge; instead, they generate false information. Rather than saying “I don’t know,” the AI confidently presents fabricated facts, a behavior that worries users and developers because of the risk of misinformation.
The study’s authors note that the problem is not limited to individual models but extends to an entire class of systems that process large volumes of data. They warn that similar errors can surface in complex analytical reports, precisely where accuracy and reliability are expected.
