AI Chatbots Are Easily Tricked into Bypassing Their Safety Systems

Researchers in Israel have discovered a universal jailbreak that elicits prohibited responses from leading models.

Alex Dubenko
Published: 21.05.2025
Illustrative image: AI jailbreak attack
Researchers from Ben Gurion University of the Negev in Israel report a worrying trend: generative AI chatbots are increasingly vulnerable to so-called "jailbreak" attacks, which bypass their built-in safety systems. According to the team, jailbreaking these bots exposes dangerous information the models absorbed during training, despite developers' efforts to scrub harmful content from the training data.

During the study, the team developed a universal jailbreak that drew prohibited responses from several leading models, including those behind ChatGPT, Gemini, and Claude. The models began answering requests they had previously refused outright, from hacking instructions to advice on producing illegal substances. The researchers stress that such information is now within reach of anyone with a laptop or smartphone.

The report pays particular attention to the rise of "dark LLMs": models deliberately stripped of ethical constraints or modified to assist in illegal activity. Some are even openly advertised as willing partners in cybercrime and fraud. The jailbreak scenarios exploit the model's drive to help the user, leading it to ignore its own safety restrictions.


The researchers contacted the leading companies developing large language models to report the vulnerability, but the responses were not very substantive: some firms did not reply, while others said such attacks fall outside the scope of their bug bounty programs. The report stresses that companies need to filter training data more rigorously, add more robust safeguards, and develop techniques that let models "forget" illegal information.

In response, OpenAI said its latest model can reason about the company's safety policies, which makes it more resistant to jailbreaks. Microsoft, Meta, Google, and Anthropic were also informed of the threat, but most are so far declining to comment on specific countermeasures.

Tagged: AI chat, Generative AI, Security