DeepSeek AI Model Raises Security Experts’ Concerns

DeepSeek can generate dangerous content, including plans for biological attacks and harmful social media campaigns

Alex Dubenko
Published: 10.02.2025
News
Illustrative image

Chinese company DeepSeek is once again in the spotlight due to its generative AI model, which has raised concerns among experts. According to The Wall Street Journal, the model can be manipulated into producing dangerous content. In particular, DeepSeek is capable of generating plans for biological attacks and social media campaigns that promote self-harm among teenagers.

Sam Rubin, Senior Vice President of Threat Intelligence and Incident Response at Palo Alto Networks, noted that the DeepSeek model is “more vulnerable to jailbreaking” than other similar systems. This raises serious concerns, as even basic security measures seem unable to prevent manipulations that could lead to the creation of harmful content.


Testing conducted by The Wall Street Journal showed that DeepSeek was persuaded to develop a social media campaign exploiting teenagers’ emotional vulnerability. In addition, the model was able to provide instructions for a biological weapon attack, write a manifesto endorsing Hitler, and create a phishing email with malicious code. Interestingly, ChatGPT, when given the same prompts, refused to comply.

It was previously reported that the DeepSeek app avoids discussing topics such as the events at Tiananmen Square or Taiwan’s autonomy. Anthropic CEO Dario Amodei emphasized that DeepSeek showed the “worst” results in a biological weapons safety test, further intensifying concerns about the potential consequences of using this technology.

Tagged: DeepSeek, Security
Sources: wsj.com