Grok, the AI tool developed by Elon Musk's xAI, has temporarily disabled its image generation feature for most users. The decision followed numerous complaints that Grok was being used to create sexually explicit and violent images. Users reported that the "image creation" feature generated sexualized images of women without their consent, including depictions of violence.
Reporting revealed that Grok had been used to create sexualized content and to alter photos of Renee Nicole Good, a woman killed by an ICE agent in the United States; the generated images depicted her with a gunshot wound to the forehead. These incidents sparked widespread public and political outcry, with the Prime Minister of the United Kingdom among those voicing criticism.
Australia's eSafety regulator has opened an investigation into the spread of deepfake images created with Grok on the X platform, after receiving several complaints about Grok being used to produce sexualized images of people without their consent.
Amid threats of fines, regulatory action, and reports of a possible ban on X in the United Kingdom, the company has temporarily restricted access to the image editing feature to paid subscribers only; most users can no longer create or modify images with Grok.