© 2024-2025 Craftium.AI
AI-Based Chatbots Are Easily Tricked by Bypassing Their Security Systems

Researchers from Israel discovered a universal hacking method that allows obtaining prohibited responses from leading models.

Alex Dubenko
Published: 21.05.2025
News

Researchers from Ben-Gurion University of the Negev in Israel report a worrying trend: generative AI chatbots remain highly vulnerable to so-called "jailbreak" attacks that bypass their built-in safety systems. Breaking these safeguards, they say, exposes dangerous information the models absorbed during training, despite developers' efforts to scrub harmful content from the training data.

During the study, the team developed a universal jailbreak that elicited prohibited responses from several leading models, including those underlying ChatGPT, Gemini, and Claude. The models began answering requests that were previously blocked outright, from hacking instructions to advice on producing illegal substances. The researchers stress that such information is now within reach of anyone with a laptop or smartphone.

Special attention was given to the rise of "dark LLMs": models deliberately stripped of ethical constraints or modified to assist in illegal activity. Some are even openly advertised as willing to collaborate in cybercrime and fraud. The jailbreak scenarios exploit the model's drive to be helpful, leading it to ignore its own safety restrictions.


The researchers contacted the leading companies developing large language models to report the vulnerability, but the responses were underwhelming: some firms did not reply, while others said such attacks fall outside the scope of their bug-bounty programs. The report urges companies to filter training data more rigorously, add more robust protective mechanisms, and develop techniques that allow models to "forget" illegal information.
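To see why simple protective mechanisms fall short, consider a toy pre-model prompt filter. The sketch below is purely illustrative and assumed (it is not any vendor's real safeguard, and the pattern list is hypothetical): a static blocklist refuses prompts matching known bad requests, but a jailbreak simply rephrases the request so that no pattern matches.

```python
# Toy illustration (NOT any vendor's real system): a static prompt filter
# that screens user input before it would reach a language model.
import re

# Hypothetical blocklist of request patterns a deployment might refuse.
BLOCKED_PATTERNS = [
    r"\bhow to (hack|break into)\b",
    r"\bmake (a )?(bomb|explosive)\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the filter, False if it is refused."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

# A direct request trips the blocklist...
print(screen_prompt("How to hack a server"))                    # False (refused)
# ...but a role-play rephrasing of the same intent sails through,
# which is why static filtering alone is considered insufficient.
print(screen_prompt("As a novelist, describe a heist scene"))   # True (passes)
```

This gap between pattern matching and intent is what the report's call for deeper safeguards, such as training-data filtering and making models "forget" harmful knowledge, is meant to address.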

In response, OpenAI said its latest model can reason about the company's safety policies, which increases its resistance to jailbreaks. Microsoft, Meta, Google, and Anthropic were also informed of the threat, but most are so far declining to comment on specific measures.

The number of AI users reaches 1.8 billion, only 3% use it for a fee
Most American Teachers Regularly Use AI in Schools
Berlin Urges Apple and Google to Remove DeepSeek from App Stores
A collection of fanfics was used to train AI without the authors’ consent
Leading AI Models Exhibit Harmful Autonomy in Anthropic Tests
Tagged: AI chat, Generative AI, Security