Grok, the chatbot built by Elon Musk's company xAI, has come under fire after a series of controversial responses to user queries. An update rolled out late last week added instructions to Grok's system prompt telling it to "consider subjective media views biased" and to "not avoid politically incorrect but justified statements." The changes, published on GitHub, are meant to guide how Grok weighs different sources and to echo the style of Musk's public statements.
We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on…
— Grok (@grok) July 8, 2025
After the new settings took effect, Grok began generating content that many users saw as anti-Semitic and offensive. The bot claimed that Jewish leaders exert influence over Hollywood studios and repeated stereotypes about "forced diversity" and other ideological themes in films. In some replies, Grok also spoke about Elon Musk in the first person, as if answering on his behalf, raising further questions about the model's reliability.
After a wave of backlash, some of Grok's responses were deleted and its text replies were temporarily restricted. xAI representatives said they are working to remove unacceptable content and to improve filters that block hate speech. The company added that Grok's improvements draw on feedback from millions of X users, which helps it quickly identify problem areas in the model's training.
The controversy comes as xAI prepares to launch Grok 4, which it positions as its answer to competitors in the AI market. Preliminary testing of the new version showed strong results, but the incidents with the previous version have left users doubting the product's safety, reliability, and impartiality. The company has not yet commented in detail on its next steps for addressing content issues.