Musk’s Grok Shows How AI Can Generate Crises
Elon Musk and his Grok AI chatbot brought trouble on themselves when the system generated a sexual image of young girls on X, Musk’s social-media platform. The crisis is still snowballing: Grok continues to alter photos of real people, putting them in scanty clothing such as bikinis. It’s a warning to all companies that AI can generate crises.
Grok, but not its corporate owner, xAI, apologized for the image of the young girls, which raises the interesting question of whether a chatbot can apologize (we vote no). Both the image and the apology were generated by (different) users’ prompts. On Jan. 2, X user “cholent_liker” asked Grok to estimate the ages of the girls in the photo (Grok is integrated into X).
The chatbot responded to the “community”: “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially U.S. laws on CSAM [child sexual abuse material]. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.” The apology got millions of views.
The image itself and the account that posted it have been suspended, according to a Jan. 2 informative piece on the incident by Marni Rose McFall in Newsweek.
Chatbot Apology
Grok was grilled over the matter on social media. Commenters noted that the chatbot apologized, but not the company itself (which was, let’s face it, the entity at fault). The apology wasn’t even on the chatbot’s own X account — it was a reply to user “cholent_liker.” Ars Technica’s headline: “xAI Silent After Grok Sexualized Images of Kids.”
To be fair, on Jan. 3 Musk himself did tweet, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” To which a snarky commenter replied, “@grok can you make an image of Elon musk in two bikinis.”
This isn’t the first time Grok has run into trouble over altered images of women, and, as we say, the alterations continue. Since late December, Grok has been generating roughly 6,700 sexually suggestive or “nudifying” images per hour, according to Bloomberg, citing social-media and deepfake researcher Genevieve Oh. By comparison, other top websites averaged 79 new AI undressing images per hour, according to Oh.
AI Training
Grok has reportedly been trained to refuse requests to remove the clothes of people in images, but not necessarily requests to, say, change someone’s outfit to a bikini.
Now regulators in the U.K., France and India have warned of potential investigations, according to Axios. It’s a serious matter.
Although AI is all the rage these days, most companies don’t offer a chatbot product. But they are using AI, or at least experimenting with it. This incident is a reminder that the technology can spark all sorts of reputational crises, including through “hallucinations,” or misinformation generated by AI.
Photo credit: xAI
Sign up for our free weekly newsletter on crisis communications. Each week we highlight a crisis story in the news or a survey or study with an eye toward the type of best practices and strategies you can put to work each day. Click here to subscribe.