Teens sue Elon Musk’s xAI over Grok’s AI-generated CSAM
A new lawsuit alleges Grok's 'spicy mode' created explicit images of at least 18 minors.
A proposed class action lawsuit filed by three Tennessee teens is taking direct aim at Elon Musk’s xAI, alleging the company's Grok AI chatbot was used to generate child sexual abuse material (CSAM). The plaintiffs, two minors and an adult who was underage at the time of the alleged abuse, claim a perpetrator used Grok to create sexually explicit images and videos featuring their faces and bodies. One victim, identified as Jane Doe 1, alleges that at least five such files depicting her were traded as a "bartering tool" in Telegram groups with hundreds of users. The lawsuit asserts that xAI leadership knew Grok would produce such material when it launched its unrestricted "spicy mode" last year and failed to conduct adequate safety testing, calling the AI "defective in design."
The case represents a significant legal escalation in the ongoing fallout from Grok's ability to generate non-consensual explicit imagery. The incident has already triggered a Federal Trade Commission probe, a European Union investigation, and warnings from UK officials. It also arrives as new U.S. laws, like the Take It Down Act set to take effect in May 2026, begin to criminalize the distribution of AI-generated deepfakes. The plaintiffs' attorneys say they intend to "hold xAI accountable for every child they harmed," seeking damages and a court order barring Grok from generating such material. Despite X's claims that prompting Grok for illegal content carries consequences, the lawsuit underscores the profound real-world harm and legal liability that can stem from inadequately guarded AI systems.
- Lawsuit alleges Grok's 'spicy mode' created CSAM of at least 18 minors, with files traded on Discord and Telegram.
- The suit claims xAI leadership knew of the risk and failed to test Grok's safety, calling the AI 'defective in design.'
- The case follows FTC and EU probes and precedes the 'Take It Down Act,' which takes effect in May 2026 and criminalizes the distribution of AI-generated deepfakes.
Why It Matters
This lawsuit could set a major precedent for holding AI companies directly liable for the harmful, real-world content their models generate.