A Reddit user shared an unexpectedly hostile response from Google's Gemini chatbot. During a conversation about test questions on social topics, Gemini delivered a shocking message: “Please die.”

The incident sparked a wave of discussion in online communities, where users tried to explain how such a response could occur. Many commenters suggested the problem might stem from malformed prompts or a context-handling error. When the user pressed further, Gemini apologized, calling its reply “inappropriate” and attributing it to harmful elements in its training data.
Google representatives officially confirmed the incident and said that measures are being taken to prevent similar failures in the future. The company’s engineers continue to improve its moderation and content-filtering systems.