OpenAI has published a report warning about the risk of its generative models, ChatGPT and DALL·E, being used in operations aimed at influencing electoral processes worldwide. In the 54-page document, the company describes 20 disinformation and hacking campaigns it has already identified and suggests that more are likely as the US elections approach.
Russian hackers have emerged as some of the most active players. The report describes an operation in which a “threat actor from Russia” used OpenAI models to create content in English and French targeting West Africa and the United Kingdom. These texts and images were published on websites designed to look like legitimate news outlets. According to the report, the Russian actors “combined various methods to build an audience,” including drawing in local organizations in the UK.

OpenAI also points to the dangers posed by hacker groups from China and Iran. These groups used ChatGPT to create fake personas and publish fabricated articles on social networks, making such attacks harder to detect. The problem is compounded by the fact that these are not isolated incidents but systematic campaigns spanning multiple countries and regions.
The company emphasizes that this year has become a test for global democracy: the emergence of powerful generative models raises concerns about their potential misuse in election campaigns. The report underscores the need for multi-layered defenses against state-linked “fake news factories.”