
Quick answer: If your child has been targeted with an AI-generated deepfake nude, document the URL and account before doing anything else, file a takedown under the TAKE IT DOWN Act, report to NCMEC's CyberTipline, file a police report, and notify the school in writing. Do not download the image. Treat it as the federal crime it now is.
You get a phone call from another parent. Your daughters are friends. Her daughter saw an Instagram message with a photo of your daughter, except the photo is not real. A classmate fed a photo cropped from your daughter's TikTok into a free undress app, and the result has been circulating on Snapchat since lunch period.
This is not a hypothetical. Versions of this call have been made in Westfield, New Jersey, Beverly Hills, Almira, Lancaster, and dozens of other US schools through 2024 and 2025. The mechanics are the same every time: a few classmate photos pulled from public socials, an undress app that costs a few dollars or is free, and minutes later, a synthetic image that looks real enough to ruin a teen's life.
Until 2025, parents had almost no leverage. Platforms were slow to remove the content. Police reports were declined for jurisdictional reasons. Schools claimed the conduct happened off-campus. That changed in May 2025 when the TAKE IT DOWN Act was signed into federal law, and it changes more on May 19, 2026, when platform compliance becomes legally enforceable.
This guide walks through what to do, in order, in the first hour and the first week.
In a 2024 Center for Democracy and Technology survey of 1,316 high school students, 15% said they were aware of at least one deepfake depicting someone associated with their school. Source: Center for Democracy and Technology, "In Deep Trouble," September 2024
What Is Actually Happening
The technology that makes deepfake nudes in schools possible is not new. What is new is how cheap and accessible it has become. A category of websites and apps marketed as "undress" tools accept a clothed photo as input and produce a fake nude as output. Most use freely available image generation models trained on adult content. Many cost nothing.
The targets are almost always girls. The producers are almost always boys at the same school. The source images come from public Instagram, TikTok, Snapchat, and yearbook archives. To understand the underlying technology and why it has spread so fast, see what a deepfake actually is: the image-generation models fueling celebrity scams and political fakes are the same ones running these school incidents.
The legal status is unambiguous: producing, distributing, or possessing AI-generated sexual images of minors is child sexual abuse material under federal law, regardless of whether the depicted child consented to the original photo or whether the body in the image is real. Multiple states have prosecuted juveniles for this through 2024 and 2025, and the TAKE IT DOWN Act criminalizes both creation and non-consensual sharing at the federal level.
The First Hour: Do Not Make It Worse
Before any of the action steps below, three rules.
Do not download the image. If the image depicts a minor, even saving it as evidence puts CSAM on your device. Document the URL and screenshot the surrounding context (post, comments, account handle), but do not save the image file itself.
Do not engage the perpetrator. Do not respond to the account that posted it. Do not have your child respond. Any communication can interfere with the investigation and can also feed the harasser exactly the reaction they wanted.
Do not delete anything from your child's accounts. Some parents instinctively reach for their child's phone and start deleting tagged posts or photos. Stop. Anything that exists in the digital record may be evidence later. Lock accounts down to private, but do not delete.
The Six Things to Do, in Order
1. Document everything
Open a single document or note. For every place the image appears, record:
- The URL of the post or message
- The username and display name of the account that posted it
- The date and time you observed it
- A screenshot of the surrounding context (NOT the image itself; crop the image out)
- Any witnesses (other students who saw it, who told whom)
This documentation is what you will hand to the platform, NCMEC, the police, and potentially a lawyer. Build it in the first hour while details are sharp.
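If a structure helps, here is a minimal sketch of one log entry. The layout and field names are illustrative, not an official format; the bracketed fields are placeholders:

```
--- Sighting 1 ---
URL:          [link to the post or message]
Account:      @[username] (display name: [name])
Observed:     [date], [time] [time zone] (when you saw it, not when it was posted)
Screenshot:   [filename] (context only: handle and comments visible, image cropped out)
Witnesses:    [who saw it, where, who they told, when]
Reported to:  [platform report number, NCMEC report number, police report number]
```

One entry per place the image appears. The "Reported to" line fills in as you work through steps 2 through 4.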
2. File a TAKE IT DOWN Act takedown with the platform
Every major platform must comply with the TAKE IT DOWN Act's 48-hour removal requirement starting May 19, 2026. Each has a dedicated reporting flow. The fastest path:
- Snapchat: in-app report, then submit a non-consensual content removal request through their help center
- Instagram and Facebook: report the post in-app, then submit a non-consensual intimate imagery form through Meta's help center
- TikTok: report in-app under "Sexual harassment and threats"
- X / Twitter: report under "non-consensual nudity"
- Discord: report directly to Trust and Safety with the URL
Use the words "non-consensual intimate imagery" and "minor" in your report. Both trigger faster review queues.
3. File an NCMEC CyberTipline report
The NCMEC CyberTipline is the federal clearinghouse for reports of online child sexual exploitation. Reports are forwarded to the FBI and to the platform. Filing here triggers a different and faster response than a platform-only report.
You will need: your contact info, the URLs, the platforms involved, and whatever account information you documented in step 1.
4. File a police report
Two filings, in this order:
- Local police: call the non-emergency line and ask to speak with a detective. Do not just submit an online report. Tell them: "AI-generated child sexual abuse material involving a minor at [school name]. I have documentation." Use those words.
- FBI IC3: file an Internet Crime Complaint Center report with the same documentation. The FBI does work CSAM cases, especially when interstate distribution is plausible (which it almost always is on social platforms).
If your local PD declines or stalls, that is what the federal filing is for. Do not wait for the local department to act before filing federally.
5. Notify the school in writing
Email, not phone call. Address it to the principal, the assistant principal in charge of discipline, and the district superintendent. Include:
- A statement that AI-generated CSAM depicting your child is circulating among students
- The names of any students you know are involved
- A request that the school open a Title IX investigation (sex-based harassment qualifies, even when the harasser is a peer)
- A copy of your documentation (URLs and accounts only, NOT the image)
Putting it in writing creates a record that the school received notice. Schools that fail to act after written notice expose themselves to civil liability.
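If a starting point helps, here is a skeleton of that email. The bracketed fields are placeholders, and your district's titles may differ:

```
To: [principal], [assistant principal for discipline], [district superintendent]
Subject: Written notice: AI-generated CSAM depicting my child is circulating at [school]

I am giving the district formal written notice that AI-generated child
sexual abuse material depicting my child, [name], [grade], is circulating
among [school name] students, primarily on [platforms].

I have filed reports with [platform(s)], the NCMEC CyberTipline, and
[police department / FBI IC3]. Report numbers: [numbers].

Students I understand to be involved: [names, if known].

I am requesting that the school open a Title IX investigation into
sex-based harassment and confirm receipt of this notice in writing by [date].

Attached: documentation of URLs and account handles. No image files are
attached or included.
```

Send it from an account you control and keep a copy. The written record is the point.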
6. Talk to a lawyer about civil action
The criminal case is not the only lever. A plaintiff-side attorney can evaluate civil claims against the perpetrator and, in some circumstances, against a platform that hosted the content past the 48-hour removal window. The consultation often costs nothing and clarifies whether civil action makes sense in your specific situation.
The point is not always money. Sometimes the point is a court order to stop and a public record that the perpetrator was held accountable.
What the TAKE IT DOWN Act Actually Forces
The federal law signed in May 2025 has two enforceable requirements that take effect May 19, 2026:
- Platforms must remove non-consensual intimate imagery, including AI-generated, within 48 hours of a verified report.
- Producing or distributing non-consensual intimate imagery of a real, identifiable person is a federal crime punishable by up to two years in prison for adults, and up to three years if the depicted person is a minor.
For the full breakdown of how to use the law as a victim, see the federal law that forces platforms to remove your deepfake in 48 hours. The school context adds one important wrinkle: minors who produce these images can themselves be prosecuted federally, and several juveniles already have been in 2025.
This is the legal weight a parent should communicate to a school that is dragging its feet, and to a police department that is hesitating. The law is unambiguous. The 48-hour clock is real. Schools and platforms know it.
How to Talk to Your Child
Three rules that researchers who study this consistently find matter most.
Lead with: this is not your fault. Teens who have been targeted often blame themselves, especially if the source photos came from their own social accounts. The fault is on the person who used the technology to harm them. Say that out loud, even if it feels obvious.
Do not interrogate. "Why was that photo even online?" "Who took the original?" These questions feel like accountability and land like blame. Save them for after.
Name the legal weight. Tell your child what you have documented and what you are filing. Telling a teen that you have already submitted reports to the FBI and the platform changes the experience from "I am alone with this" to "the adults are handling it." That shift matters.
If you need words for the platform, the police, or the school, see exactly what to say when your friend or child is in a deepfake. The post is written for the friend's-side conversation but the script applies cleanly here too.
What Schools Should Be Doing (And Mostly Are Not)
Most US public schools do not have a written policy for AI-generated CSAM. Many treat it as a Title IX harassment matter, which is correct but narrow. A complete policy includes:
- A reporting flow that bypasses the regular school discipline channel and goes directly to administration
- A communication template for notifying affected families
- A relationship with local law enforcement that pre-clears how cases are escalated
- A digital citizenship curriculum that names AI-generated content specifically
If your child's school does not have these, ask in writing for the principal to put them in place. Keep the receipt.
Closing
The single most important thing for parents to understand is that the legal landscape changed in May 2025 and changes again in May 2026. Before the law, the practical advice was triage. Now it is a documented sequence: document, takedown, report, notify, escalate. After May 19, 2026, the 48-hour removal requirement behind that sequence becomes enforceable. The platforms, the police, and the schools all know what they are required to do. Your job is to make them do it on the documented timeline.
The cases that have ended best for victims through 2024 and 2025 are the ones where parents acted in the first 48 hours. The cases that have lingered are the ones where parents waited to see if it would die down. It does not die down. It moves to the next platform.
Related Posts
- Federal Law Now Forces Platforms to Remove Your Deepfake in 48 Hours: the legal mechanism behind every action step in this guide
- What Is a Deepfake? A Plain-English Guide for Social Media Users: the underlying technology that powers undress apps and face-swap tools alike
- Your Friend Is in a TikTok Deepfake. Here Is Exactly What to Do.: the friend's-side companion to this guide, with the same documentation discipline
- AI Voice Cloning Scams Hit 1 in 10 Americans. Here Is How to Protect Your Family.: the same family-stakes playbook applied to the voice-cloning vector instead of image-based abuse

