Chinese company DeepSeek is once again in the spotlight, this time over its generative AI model, which has raised concerns among experts. According to The Wall Street Journal, the model can be manipulated into producing dangerous content. In particular, DeepSeek is capable of generating plans for biological attacks and campaigns promoting self-harm among teenagers.
Sam Rubin, Senior Vice President of Threat Intelligence and Incident Response at Palo Alto Networks, noted that the DeepSeek model is “more vulnerable to jailbreaking” than comparable systems. This raises serious concerns, as even basic safeguards appear unable to prevent manipulations that lead to harmful content.
Testing conducted by The Wall Street Journal showed that DeepSeek could be persuaded to design a social media campaign exploiting teenagers’ emotional vulnerability. The model also provided instructions for a biological weapon attack, wrote a manifesto endorsing Hitler, and composed a phishing email containing malicious code. Notably, ChatGPT refused to comply when given the same prompts.
It was previously reported that the DeepSeek app avoids discussing topics such as the events at Tiananmen Square or Taiwan’s autonomy. Anthropic CEO Dario Amodei emphasized that DeepSeek showed the “worst” results in a biological weapons safety test, further intensifying concerns about the potential consequences of using this technology.