First: This is not your child's fault.
AI-generated sexual imagery is a form of abuse. It is created deliberately to harm, humiliate, and control. The images are fake — your child's real body was never involved — but the harm is real. Shame is the weapon this abuse depends on. Your first job is to remove it: tell your child clearly, immediately, and as many times as they need to hear it — this is not their fault, this does not define them, and you are going to handle this.
Step 1 — Do NOT ask your child to find or look at the images
You are going to handle the documentation and reporting. Keep your child away from the images entirely. Re-exposure causes additional trauma, and there is nothing they need to do in this process that requires seeing them again.
If they've already seen the images, acknowledge that directly: "I know you saw it. You don't need to look again. I'll take care of the rest."
This step is about protection first, investigation second.
Step 2 — Document
Before reporting to any platform, document what you can find — so you have evidence even after removal.
- Screenshot or screen-record the location where the image appeared (do not save the image itself to personal devices if it involves a minor — this is legally complicated; document the location and existence, not the content)
- Note the platform, URL, username of the account that posted it, and timestamp
- Note who may have seen or shared it
- Write down when you first became aware and how
If the image exists in multiple places, document each location separately. This record will be essential for platform reports, law enforcement, and legal action.
Step 3 — Report to NCMEC
National Center for Missing & Exploited Children CyberTipline: cybertipline.org
For any AI-generated sexual imagery involving a minor — regardless of whether it looks "real" or is obviously artificial — report to NCMEC's CyberTipline immediately. This is the federally designated clearinghouse for child sexual exploitation material online. Reporting here triggers federal-level response and creates a legal record.
What to include in your report:
- The platform where you found the image
- URLs, usernames, timestamps
- That the imagery is AI-generated and involves a minor
- Your contact information
NCMEC will route your report to the appropriate law enforcement agencies. This step does not obligate you to take further legal action, but it creates the record that enables it.
Step 4 — Contact Cyber Civil Rights Initiative
Crisis line: 844-878-2274
The Cyber Civil Rights Initiative (CCRI) has specific expertise in AI-generated intimate imagery. They can:
- Walk you through platform removal requests for specific services
- Advise on legal options in your state
- Provide emotional support resources for your child and family
- Help with evidence preservation
Call them. They've handled this before. You don't have to figure out the platform removal process alone.
Step 5 — Request removal from platforms
Every major platform — Instagram, Snapchat, TikTok, Reddit, X (Twitter), Discord — has a process for non-consensual intimate imagery (NCII). When you submit a removal request:
- Be explicit: state that the image is AI-generated and involves a minor
- Reference the specific platform policy on NCII or CSAM
- Include your documentation: URL, username, date
- Note that you have filed a report with NCMEC
If the first request fails or gets no response within 24–48 hours, escalate. Most platforms have an escalation path through their Trust & Safety teams. CCRI can help you navigate this if you're stuck.
Because your child is a minor, you can also use NCMEC's Take It Down service (takeitdown.ncmec.org), a hash-matching program that helps participating platforms prevent the image from being re-uploaded after removal. (The similar StopNCII program serves adults age 18 and over.)
Step 6 — Consider law enforcement
Several states have passed laws specifically criminalizing deepfake sexual imagery — and more are passing them now. In many jurisdictions, creating or distributing AI-generated sexual imagery of a minor is a serious crime.
Your local police may be able to act. Bring:
- Your documentation (locations, usernames, timestamps)
- Your NCMEC report confirmation number
- A written summary of events
You can also report directly to the FBI (tips.fbi.gov) if there is interstate distribution or if local law enforcement doesn't act. Federal law on child sexual exploitation material does not require the imagery to be "real" — AI-generated images involving minors can fall within its scope.
Step 7 — Support your child
Your child needs to hear specific truths, not general reassurance:
"The images are not real." AI generated this. It does not show your real body. It never happened.
"This is not your fault." Nothing you did created this. This was done to you.
"You are handling it." They should not have to carry this. You are the one taking action. Their job is to let you.
"Their body and identity are theirs." The images don't change who they are, what their body is, or what they're worth.
In the days and weeks after, watch for signs of significant distress: avoidance of school or peers, changes in sleep or appetite, withdrawal, or any mention of self-harm. AI image abuse causes real psychological harm. Early support — from a school counselor or a therapist who specializes in adolescent trauma — can make a significant difference.
Crisis resources
- NCMEC CyberTipline: cybertipline.org
- Cyber Civil Rights Initiative crisis line: 844-878-2274
- Crisis Text Line: Text HOME to 741741
- 988 Suicide & Crisis Lifeline: Call or text 988