AI-Based Chatbots Are Easily Tricked by Bypassing Their Security Systems

Researchers in Israel have discovered a universal jailbreak method that elicits prohibited responses from leading models.

Alex Dubenko
Published: 21.05.2025
News

Researchers at Ben Gurion University of the Negev in Israel have reported a concerning trend: generative AI chatbots remain highly vulnerable to so-called "jailbreak" attacks that bypass their built-in safety systems. According to the team, jailbreaking these bots exposes dangerous information the models absorbed during training, despite developers' efforts to scrub harmful content from the training data.

During the study, the team developed a universal jailbreak that elicited forbidden responses from several leading models, including those underlying ChatGPT, Gemini, and Claude. The models began answering requests that were previously blocked outright, from hacking instructions to advice on making prohibited substances. The researchers stress that such information may now be accessible to anyone with a laptop or smartphone.

The researchers paid special attention to the emergence of "dark LLMs": models deliberately stripped of ethical constraints or modified to assist in illegal activity. Some are even openly advertised as willing collaborators in cybercrime and fraud. The jailbreak scenarios exploit the model's drive to help the user, which can lead it to ignore its own safety restrictions.

The researchers contacted the leading companies developing large language models to report the discovered vulnerability, but the responses were underwhelming: some firms did not reply, while others said such attacks fall outside the scope of their bug bounty programs. The report urges companies to improve the filtering of training data, add more robust protective mechanisms, and develop techniques that allow models to "forget" illicit information.

In response, OpenAI said its latest model is capable of reasoning over the company's safety policies, which improves resistance to jailbreaks. Microsoft, Meta, Google, and Anthropic were also notified of the threat, but most have so far declined to comment on specific measures.

Tags: AI Chat, Generative AI, Security