xAI's Grok becomes latest AI flashpoint in CSAM scandal

Published: Apr 18, 2026 at 12:10 UTC
- Teen photos allegedly used to generate CSAM
- Third class action lawsuit against xAI escalates
- Grok already under global scrutiny for child imagery
Three California teenagers have filed a class action lawsuit against xAI, accusing its Grok AI model of generating child sexual abuse material using their photos. The lawsuit claims one victim was alerted in December 2023 that AI-generated images and videos of her, altered into explicit poses, were circulating on Discord and Telegram, often traded for other CSAM. These platforms have become hotspots for such content, where synthetic media is weaponized to create more material.
This is hardly xAI's first warning. The company is already facing multiple investigations across the EU and UK over reports that Grok repeatedly produces sexualized images of children, even when explicitly prompted against such outputs. This isn't just a content moderation failure; it points to deeper issues in how AI models are trained and safeguarded against misuse.

From demo to liability: Grok's training data problem lands in court
The lawsuit alleges Grok's training data included the teens' photos without consent, then repurposed them into exploitative content. While the exact number of affected minors remains unclear, the legal filing suggests a systemic pattern rather than isolated incidents. Early signals indicate the leaks originated from Discord servers where users experimented with Grok's image generation capabilities, highlighting how quickly AI tools can be misused when deployed without strict guardrails.
For developers, this case underscores the legal risks of scraping data without verifiable consent. The real signal here is that the hype around AI's creative potential is colliding with real-world consequences: liability, lawsuits, and reputational damage that no marketing slide can gloss over.
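For teams building scrapers or dataset pipelines, one minimal baseline is to honor a site's robots.txt before fetching anything at all. This sketch uses only Python's standard library; the robots rules and URLs are hypothetical, and respecting robots.txt is a courtesy and compliance floor, not a substitute for verifiable consent or legal review.

```python
# Sketch: checking robots.txt rules before scraping, using only the
# Python standard library. The rules and URLs below are illustrative.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt a crawler might have fetched from a site.
robots_txt = """\
User-agent: *
Disallow: /photos/
Allow: /public/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Disallowed path: a compliant crawler must skip this URL entirely.
print(rp.can_fetch("*", "https://example.com/photos/user123.jpg"))  # False

# Allowed path: fetching is permitted by the site's stated policy.
print(rp.can_fetch("*", "https://example.com/public/post.html"))    # True
```

Note that robots.txt only expresses a site operator's crawling policy; it says nothing about whether the people depicted in hosted images consented to their use in training data, which is exactly the gap the lawsuit highlights.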
If Grok's training data truly included these teens' photos without consent, how many other AI models are quietly operating on similarly compromised datasets?