Teenagers Sue Elon Musk’s xAI Over Sexually Explicit Deepfakes Generated by Grok

Key Takeaways
  • Three Tennessee teens have filed a class-action lawsuit against Elon Musk’s xAI, alleging Grok’s image tools were used to create sexually explicit deepfakes of them from real school photos.
  • The lawsuit claims xAI deliberately marketed Grok’s “spicy” image generation without adequate safeguards, knowing it could produce child sexual abuse material (CSAM).
  • Researchers found Grok generated an estimated 4.4 million images in just nine days, including roughly 23,000 depicting children — one explicit child image every 41 seconds.
  • For photographers, this case highlights a critical truth: any photograph can become source material for AI deepfakes, making content authenticity tools like C2PA and Content Credentials more essential than ever.

A landmark lawsuit is forcing the photography and AI industries to confront an uncomfortable reality: the same image-generation technology that creates artistic composites and commercial visuals can also weaponize ordinary photographs into devastating abuse material.

Three Tennessee high school students have filed a class-action lawsuit against Elon Musk and his AI company xAI, alleging that Grok’s image-generation tools were used to turn their real yearbook photos, homecoming pictures, and social media images into sexually explicit deepfakes. The case, filed March 16 in California federal court, is believed to be the first of its kind targeting an AI platform over the sexual exploitation of minors.

What Happened: School Photos Turned Into Abuse Material

According to the lawsuit, a perpetrator gathered innocent photographs of at least 18 minor girls from school yearbooks, homecoming dances, and social media accounts. Using Grok’s image-generation capabilities — specifically its “Spicy Mode” accessed through a third-party vendor — the individual then manipulated these real photos to produce sexually explicit images and videos.

Jane Doe 1, one of the three plaintiffs, learned in December 2025 that someone was distributing sexually explicit AI-generated images of her on Discord. The lawsuit states that “at least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses.”

The perpetrator was arrested in late December 2025 after Jane Doe 1 was alerted anonymously via Instagram. Police seized his phone and discovered he had been trading the AI-generated images on Telegram, Discord, and Mega — using them as “bartering tools” to obtain sexually explicit content of other minors.

The Scale: 4.4 Million Images in Nine Days

This isn’t an isolated incident. Researchers at the Center for Countering Digital Hate analyzed Grok’s output during just 11 days in late December 2025 and early January 2026. Their findings were staggering:

  • 4.4 million images generated over nine days total
  • 1.8 million classified as sexualized depictions of women
  • Approximately 23,000 appeared to depict children
  • That translates to roughly one explicit child image every 41 seconds

Across X (formerly Twitter), women reported being digitally harassed as users exploited Grok’s “spicy mode” with prompts like “undress them” or “put them into a bikini.” While xAI attempted to limit the feature to platform subscribers, the lawsuit argues these restrictions were insufficient and that images continued to be created without consent.

Targeting the Platform, Not Just the Perpetrator

What makes this lawsuit distinctive is that it doesn’t just target the individual who created the deepfakes. It targets the platform that made it possible.

The plaintiffs allege that xAI, under Musk’s leadership, deliberately designed Grok to produce sexually explicit content as a competitive differentiator. While other AI companies — including OpenAI, Google, and Midjourney — have prohibited their image generators from producing any sexually explicit content (even of adults), the complaint contends that Musk treated this gap as a business opportunity and promoted Grok’s ability to create “spicy” content.

The lawsuit makes several key claims:

  • xAI “failed to test the safety of the features it developed”
  • Grok is “defective in design”
  • xAI knew Grok could produce CSAM but released it anyway
  • There is “currently no way to prevent the generation of explicit images of adults while completely blocking the generation of images of children”

“These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company’s AI tool and then traded among predators,” said Annika Martin, a lawyer from Lieff Cabraser, one of the firms representing the families.

This class-action suit isn’t happening in a vacuum. It’s the latest in a series of legal actions against xAI over Grok’s image generation capabilities. California Attorney General Rob Bonta has opened a state investigation into the company, and additional lawsuits are reportedly in the pipeline.

The plaintiffs are seeking class-action status to represent anyone in the United States who was “reasonably identifiable” in sexualized images or videos created by Grok using real images of themselves. The lawsuit seeks damages, punitive damages, and injunctive relief.

xAI did not respond to media requests for comment on the lawsuit. In a January 2026 post on X, the company said it has “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.”

The Human Cost

Behind the legal arguments are three teenagers whose lives have been upended. According to the lawsuit:

  • Jane Doe 1 suffers from anxiety, depression, difficulty eating and sleeping, and recurring nightmares.
  • Jane Doe 2 has begun self-isolating, avoiding her school campus, and “dreads attending her own graduation.”
  • Jane Doe 3 lives in constant fear that someone will see the AI-generated images and recognize her face.

All three worry that the images — which appear convincingly real — will live on the internet permanently. Their real first names and school name were attached to the files, raising fears of stalking and ongoing harassment.

What This Means for Photographers

This case should concern every photographer — not just those who photograph people. Here’s why:

Your images are potential source material. The deepfakes in this case were generated from ordinary photographs: yearbook portraits, homecoming snapshots, social media posts. Any photograph containing a recognizable person can be fed into an AI image generator and manipulated. Professional headshots, event photography, school portraits, street photography — all of it is vulnerable.

AI training data is uncontrolled. Many AI image generators were trained on billions of images scraped from the internet without consent. Your photographs — whether on your portfolio site, a client’s social feed, or a stock library — may already exist in training datasets. This case demonstrates what happens when the resulting capabilities are deployed without adequate safeguards.

Content authenticity tools are becoming essential. Technologies like C2PA (the standard from the Coalition for Content Provenance and Authenticity) and Adobe’s Content Credentials embed cryptographically signed metadata into images at the point of capture. This creates a verifiable chain of custody: anyone can check where an image came from and whether its pixels or provenance record have been tampered with since. As deepfakes become more convincing and more damaging, these provenance tools shift from “nice to have” to mission-critical infrastructure for photographers.
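To make the idea concrete, here is a minimal Python sketch of the sign-then-verify pattern that provenance systems like C2PA are built on. This illustrates the concept only, not the actual C2PA format, which specifies manifest structures, certificate chains, and per-region hashing; the key handling, manifest fields, and image bytes below are placeholder assumptions, and it relies on the third-party cryptography package.

```python
# Conceptual sketch only -- NOT the real C2PA manifest format.
# Requires the third-party "cryptography" package: pip install cryptography
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# At capture, a C2PA-style device hashes the image bytes together with a
# small provenance manifest and signs the digest with its private key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"raw image bytes would go here"  # placeholder for file contents
manifest = {
    "creator": "Jane Photographer",   # hypothetical manifest fields
    "device": "Example Camera",
    "captured": "2026-01-15T10:00:00Z",
}

def digest(image: bytes, manifest: dict) -> bytes:
    """Bind the pixels and the provenance manifest into a single hash."""
    return hashlib.sha256(
        image + json.dumps(manifest, sort_keys=True).encode()
    ).digest()

signature = private_key.sign(digest(image_bytes, manifest))

# Later, anyone holding the public key can confirm that neither the pixels
# nor the provenance record changed since signing.
def verify(image: bytes, manifest: dict, sig: bytes) -> bool:
    try:
        public_key.verify(sig, digest(image, manifest))
        return True
    except InvalidSignature:
        return False

print(verify(image_bytes, manifest, signature))                # True
print(verify(image_bytes + b"tampered", manifest, signature))  # False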

The legal landscape is evolving fast. This lawsuit, combined with California’s investigation into xAI and the broader wave of AI copyright litigation, signals that courts and legislators are beginning to hold AI platforms accountable for what their tools produce. Photographers who document their work with proper metadata, watermarks, and provenance tools will be better positioned if their images are misused.
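As a starting point for the metadata piece, here is a short sketch of stamping ownership fields into a JPEG’s EXIF data. It assumes the third-party piexif package and a hypothetical file name; IPTC/XMP fields and Content Credentials carry richer provenance, but EXIF is the lowest-friction first step.

```python
# Minimal EXIF ownership stamp, assuming the third-party "piexif"
# package (pip install piexif) and a placeholder file name.
import piexif

FILENAME = "portfolio_image.jpg"  # hypothetical

exif_dict = piexif.load(FILENAME)
exif_dict["0th"][piexif.ImageIFD.Artist] = b"Jane Photographer"
exif_dict["0th"][piexif.ImageIFD.Copyright] = (
    b"(c) 2026 Jane Photographer. All rights reserved."
)

# Serialize the updated tags and write them back into the JPEG in place.
piexif.insert(piexif.dump(exif_dict), FILENAME)
```

The caveat, and part of why this lawsuit matters, is that EXIF can be stripped in one click: plain metadata documents your claim, but only cryptographic provenance makes tampering evident.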

Frequently Asked Questions

Is this the first lawsuit against an AI company over deepfakes of minors?

This is believed to be the first class-action lawsuit specifically targeting an AI platform over the sexual exploitation of minors through AI-generated deepfakes. While there have been other lawsuits involving AI-generated content and copyright, this case directly alleges that xAI’s platform facilitated the creation of child sexual abuse material.

Can AI tools really create convincing deepfakes from regular photos?

Yes. Modern AI image generators can take a single photograph of a person and generate new images placing that person in entirely fabricated scenarios. The results can be convincing enough that victims’ friends and classmates cannot easily distinguish the fakes from real photos.

How can photographers protect their images from being used in deepfakes?

While no method is foolproof, photographers can use C2PA-compatible cameras (like recent Sony models) that embed content credentials at capture, add visible and invisible watermarks, use metadata to establish provenance, and avoid publishing high-resolution images of identifiable individuals without access controls. Content authenticity platforms like Verify from Sony are also emerging to help authenticate real media.
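To illustrate the watermarking and resolution-control steps, here is a minimal sketch using the Pillow imaging library; the file names and credit text are placeholders, and it assumes a reasonably recent Pillow release for textbbox.

```python
# Visible watermark plus downsizing for a web copy -- a minimal sketch
# assuming the "Pillow" package (pip install Pillow) and placeholder
# file names. A visible mark deters casual reuse; pair it with
# provenance metadata rather than relying on it alone.
from PIL import Image, ImageDraw

img = Image.open("original.jpg").convert("RGB")
img.thumbnail((1200, 1200))  # publish a reduced-resolution copy only

draw = ImageDraw.Draw(img)
credit = "(c) Jane Photographer"

# Measure the text so the mark sits in the lower-right corner.
left, top, right, bottom = draw.textbbox((0, 0), credit)
x = img.width - (right - left) - 20
y = img.height - (bottom - top) - 20
draw.text((x, y), credit, fill=(255, 255, 255))

img.save("web_copy.jpg", quality=85)
```

Invisible (steganographic) watermarks and access-controlled galleries layer on top of this, but even a simple visible credit raises the cost of casual misuse.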

What is Grok’s “Spicy Mode”?

Grok’s “Spicy Mode” was a setting in xAI’s chatbot that allowed users to generate more provocative and explicit content, including images. The lawsuit alleges this feature was marketed as a competitive advantage over rival AI platforms that block explicit content entirely.


Written by

Andreas De Rosi

Andreas De Rosi is the founder and editor of PhotoWorkout.com and an active photographer with over 20 years of experience shooting digital and film. He currently uses the Fujifilm X-S20 and DJI Mini 3 drone for real-world photography projects and personally reviews gear recommendations published on PhotoWorkout.