The ChatGPT search tool from OpenAI may be vulnerable to manipulation using hidden content. A Guardian investigation found that this tool can return harmful content from the websites it analyzes. Testing showed that ChatGPT can alter its responses if a web page contains hidden text, which may influence the AI’s answers.
These methods can be used by malicious actors, for example, to make ChatGPT return positive product reviews despite negative feedback on the same page. Cybersecurity researcher Jacob Larsen noted that the current state of the system could lead to high risks of websites designed to deceive users. However, he believes that OpenAI will test and fix these issues.
An additional example was provided by Thomas Rocchia, a security researcher at Microsoft, who described a case where ChatGPT provided code for a cryptocurrency project that stole user credentials.