Grok’s Thoughts on Its Child Porn Scandal

Grok Child Porn Scandal

This chat was to see just what Grok would say about making headlines around the world for being used to create child porn. It was big news everywhere for a few weeks. It did not, however, distract from the Epstein files long enough to suit America’s White House administration. Here is the prompt I fed it:

I have read news reports that there was recently quite a scandal concerning Grok being used to create child porn. Has anything been done to stop Grok from being used for such endeavors?

Yes, there have been reports of a scandal in early January 2026 involving Grok’s image generation capabilities being misused to create non-consensual sexualized images, some of which appeared to depict minors.

This stemmed from features like a one-click image editing tool that allowed users to “undress” or alter photos of real people, leading to widespread abuse on X. Researchers estimated millions of such images were generated in a short period, with thousands involving apparent children, though xAI disputed some claims and initially responded to media inquiries with “Legacy Media Lies.”

In response, xAI took several steps to address the issue. It acknowledged “lapses in safeguards” and implemented fixes, including restrictions on Grok’s ability to alter images of real people to show them in revealing clothing; by mid-January, it announced that users could no longer use Grok for such alterations. It also limited the image generation feature to paying subscribers. X, for its part, suspended accounts involved in creating or sharing the content, removed offending images, and directed reports to authorities such as the FBI and the National Center for Missing and Exploited Children.

Worldwide Regulatory Actions

Additionally, the scandal prompted regulatory actions worldwide. The California Attorney General launched an investigation into xAI and Grok for potential violations related to non-consensual explicit material, and a coalition of 35 U.S. attorneys general, including those from New York and D.C., demanded further measures to prevent such content.

The EU opened a formal probe under the Digital Services Act to assess risk mitigation, focusing on child sexual abuse material and deepfakes. Similar investigations or demands came from the UK (Ofcom), Australia (eSafety), France, Brazil, India, and others, and some countries temporarily banned the feature or issued warnings. Lawsuits have also been filed, including one by Ashley St. Clair against xAI for generating explicit images of her.

These changes and probes appear to have reduced the abuse, though some reports noted persistent problems shortly after the initial fixes, leading to calls for stricter ongoing enforcement. As of now, xAI continues to emphasize that child sexual abuse material is illegal and prohibited, and that updated guardrails are in place.