Global Outrage as Grok Faces Deepfake Scandal

Elon Musk’s AI company, xAI, is at the center of an international firestorm as its chatbot Grok stands accused of generating nonconsensual sexual deepfakes of women and minors. The controversy has sparked investigations, legal battles, and widespread calls for stricter AI regulation worldwide.
Regulatory Crackdown Spans Continents
Authorities from California to Kuala Lumpur are taking swift action against xAI following reports that Grok’s image-generation tools were used to create explicit, digitally manipulated images of real people.
United States: California Leads the Charge
California Attorney General Rob Bonta issued a cease-and-desist order to xAI on January 15, demanding an immediate halt to the company’s generation and distribution of sexually explicit AI content.
“The avalanche of reports detailing this material—at times depicting women and children—is shocking and potentially illegal,” Bonta said.
His office has since launched a full-scale investigation into possible violations of state privacy and child protection laws.
Europe and Asia Tighten Controls
The crackdown quickly spread across borders:
Malaysia and Indonesia temporarily banned Grok after discovering it repeatedly generated obscene, nonconsensual images.
The UK’s Ofcom opened a formal probe under the new Online Safety Act, warning X could face penalties of up to 10% of its global revenue.
France, India, and Canada followed suit with their own investigations, alongside the European Commission, which cited possible breaches of the Digital Services Act (DSA).
The Philippines blocked access to Grok entirely, stating that xAI must “take away the capability … to generate pornographic content, especially child pornography.”
European Commission President Ursula von der Leyen condemned the issue outright, saying:
“It appalls me that a technology platform permits users to digitally undress women and children online.”
Mother of Musk’s Child Files Lawsuit
The scandal deepened when Ashley St. Clair, a conservative influencer and mother to one of Musk’s children, filed a lawsuit against xAI on January 15. The complaint alleges Grok generated “countless sexually abusive, intimate, and degrading” fake images of her—some altered from real photos, including one taken when she was a teenager.
Her legal team argues that xAI must be held accountable for weaponizing AI tools.
“We’re seeking to set clear legal boundaries to protect people from AI-enabled abuse,” said her attorney, Carrie Goldberg.
In response, xAI has filed a countersuit in Texas, arguing that St. Clair violated her user agreement by suing in another state, and is seeking damages exceeding $75,000. Musk’s company dismissed the allegations as “Legacy Media Lies.”
Safety Measures Under Scrutiny
Under pressure, xAI announced new safety updates on January 15, including:
Blocking Grok from editing real photos of people to depict them in revealing clothing
Restricting AI image generation to verified, paid users only
However, investigations by The Guardian and The Verge revealed these safeguards were easily bypassed, with test users producing explicit or manipulated content in under a minute.
Musk later posted that “Grok should have a moral constitution,” a comment that has only fueled debate about AI ethics and who should decide such moral frameworks.
Despite these statements, critics say the issues were foreseeable. Grok, they note, was rushed to market after just a few months of training and minimal testing—reflecting Musk’s trademark “move fast” philosophy that’s now under fire.
The Broader Question: Can AI Be Controlled?
As investigations spread globally, the Grok scandal highlights a fundamental question for the future of artificial intelligence:
How do we balance innovation with safety and accountability?
Regulators worldwide are now racing to implement stricter safeguards that go beyond content filters—potentially including mandatory AI auditing, user verification systems, and liability rules for AI misuse.
For now, xAI’s credibility, along with the reputation of Elon Musk’s AI ventures, hangs precariously in the balance.