Detection Guide · April 27, 2026 · 10 min read

Federal Law Now Forces Platforms to Remove Your Deepfake in 48 Hours. Here Is How to Use It.

Think you found an AI video?

Paste the URL and let the Ledger community verify it. Free.

Check a video

Quick answer: The TAKE IT DOWN Act, signed May 2025, makes it a federal crime to publish AI-generated non-consensual intimate imagery and forces covered platforms to remove victim-reported content within 48 hours starting May 19, 2026. The first conviction came in April 2026. To file a takedown, send a written notice with the URL, your identification, and a statement that the content depicts you without consent.

When Senator Amy Klobuchar saw an AI-generated video of herself opining on Sydney Sweeney's jeans, the deepfake was already viral. The senator wrote about it in the New York Times: "I knew AI deepfakes were a problem. Then I saw one of myself."

The federal law she helped pass earlier that same year is now the most powerful tool any deepfake victim has. As of April 2026, it has produced its first criminal conviction. As of May 19, 2026, every major social platform is required to honor 48-hour takedown requests from victims.

This post covers exactly how the law works, how to file a takedown that gets honored, and what to do when a platform fails to comply.

$200 million stolen through deepfake-enabled fraud in Q1 2025 alone. 34 percent of victims were private citizens, up sharply from prior reporting periods, a shift that is moving deepfake harm from celebrities to ordinary people. Source: Resemble AI Q1 2025 Deepfake Incident Report


What the TAKE IT DOWN Act Actually Does

The TAKE IT DOWN Act, formally the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act, was signed into law on May 19, 2025 after passing both houses with near-unanimous bipartisan votes. Senator Ted Cruz and Senator Amy Klobuchar were the lead sponsors. First Lady Melania Trump publicly supported it.

The law does three things that no prior federal law did.

  1. Criminalizes the publication of NCII, including AI-generated deepfakes, of identifiable individuals without consent. Penalties run up to two years in prison for adult content and three years for content involving minors.
  2. Creates a federal civil takedown mechanism. Victims can demand removal directly from platforms without going through a court first. Platforms must act on a valid request within 48 hours.
  3. Applies to AI-generated content explicitly. Earlier federal NCII statutes were written for traditional revenge porn and had ambiguous coverage for AI-generated material. The TAKE IT DOWN Act removes that ambiguity.

The 48-hour platform compliance deadline phases in. Platforms have until May 19, 2026 to build their notice-and-takedown systems. After that date, ignoring a valid takedown request becomes a federal violation in itself.

For broader context on what is and is not legally covered as a deepfake in 2026, see Is It Illegal to Make a Deepfake? What the Law Actually Says in 2026.


The First Conviction

In April 2026, an Ohio man became the first person convicted under the TAKE IT DOWN Act. He had used AI to generate non-consensual intimate imagery of adults and minors in his neighborhood, and shared the content on a website that promoted child sexual abuse material.

Senators Cruz and Klobuchar issued a joint statement the day the conviction landed, citing it as proof that the law has teeth. Federal prosecutors used the new statute alongside existing CSAM laws.

The conviction matters for two reasons:

  1. It removes any ambiguity about whether prosecutors will use the law. They are using it.
  2. It signals to platforms that the 48-hour takedown rule is real, not theoretical. Platforms that ignore takedown requests face downstream legal exposure.


How to File a Takedown Request

The 48-hour clock starts when a platform receives a valid notice. As of this writing, most platforms have not finalized their public takedown forms ahead of the May 19, 2026 deadline, but a written takedown request that meets the law's requirements works regardless of whether the platform's UI is ready.

A valid takedown request must include:

Your identification. Full legal name, contact email, and a statement under penalty of perjury that you are the person depicted. Some platforms also accept a representative (a parent for a minor, an attorney, or an authorized agent).

The URL of the content. The full direct link to every instance you want removed. If the same content has been re-uploaded multiple times, list every URL.

A statement that the content was made or shared without your consent. Use language that tracks the statute: the content depicts you in an intimate manner and was published without your consent.

A signature. Electronic signatures are accepted.
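The four required elements above can be assembled into a plain-text notice you can paste into an email or a platform's reporting form. A minimal sketch follows; the field names and wording are illustrative only and track this post's description of the requirements, not official statutory language:

```python
from datetime import datetime, timezone

def build_takedown_notice(name: str, email: str, urls: list[str]) -> str:
    """Assemble a plain-text takedown notice with the four required
    elements: identification, URLs, a non-consent statement, and a
    signature. Wording is illustrative, not statutory text."""
    timestamp = datetime.now(timezone.utc).isoformat()
    url_list = "\n".join(f"  - {u}" for u in urls)  # one line per instance
    return (
        f"TAKE IT DOWN Act removal request ({timestamp})\n\n"
        f"Requester: {name} <{email}>\n"
        f"Content to remove:\n{url_list}\n\n"
        "The content at the URLs above depicts me in an intimate manner "
        "and was published without my consent.\n"
        "I declare under penalty of perjury that I am the person depicted "
        "(or their authorized representative).\n\n"
        f"Signed (electronically): {name}\n"
    )

# Hypothetical example:
notice = build_takedown_notice(
    "Jane Doe", "jane@example.com",
    ["https://example.com/video/123"],
)
print(notice)
```

The timestamp in the header doubles as your own record of when the notice was prepared; the delivery receipt, not this timestamp, is what starts the 48-hour clock.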

Send the notice to the platform's designated abuse address. As of April 2026, the major platforms accept TAKE IT DOWN notices through:

  • TikTok: tiktok.com/legal/report (select Synthetic media or AI, then Non-consensual intimate imagery)
  • Meta (Instagram and Facebook): the NCII reporting portal at facebook.com/help/contact covers both platforms
  • X (Twitter): the safety form at help.x.com, category Non-consensual intimate imagery
  • Reddit: the reporting form at reddithelp.com, category "Report content involving you"

Document the moment you sent the notice. Take a screenshot of your sent email or the platform's submission confirmation. The 48-hour clock starts at delivery, not at the moment a human reviewer opens the ticket.


Document Before You Report

This is the step most victims skip and regret. Once you submit a takedown notice, the platform can remove the content before any third party (your attorney, a journalist, a law enforcement officer) has a chance to review it.

Capture the following before filing the request:

  • The full URL or URLs
  • The posting account name, display name, and follower count
  • A screen recording of the content with the URL visible in the address bar or share menu
  • Screenshots of the captions, hashtags, and any comments where the content was discussed
  • The date and time you captured it

Save the documentation in two places: a personal cloud drive and an offline backup. Some victims later need this material for civil suits, restraining orders, or law enforcement reports. Reconstruction after platform removal is nearly impossible.
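One low-effort way to make the backup copies tamper-evident is to record a cryptographic digest of each captured file alongside its capture metadata. A sketch, assuming hypothetical file names; note this SHA-256 digest is for your own records and is unrelated to the perceptual hashing StopNCII performs:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_evidence(path, url: str, account: str) -> dict:
    """Write a sidecar JSON record containing a SHA-256 digest of the
    captured file plus its source URL, posting account, and timestamp."""
    data = Path(path).read_bytes()
    record = {
        "file": str(path),
        "sha256": hashlib.sha256(data).hexdigest(),  # proves the copy is unaltered
        "source_url": url,
        "account": account,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(path).with_suffix(".evidence.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return record

# Hypothetical capture file:
# record_evidence("capture.mp4", "https://example.com/video/123", "@poster")
```

Store the sidecar file with both the cloud and offline copies; if anyone later questions whether the backup matches what was online, recomputing the digest settles it.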

The same documentation pattern applies for non-AI cases. The full reporting flow across all platforms is in How to Report a Deepfake on TikTok, Instagram, or Facebook.

If the content involves any identifiable real person, also submit a hash to StopNCII.org. The hash-matching system blocks future uploads of the same image across participating platforms (Meta, TikTok, Reddit, Bumble, X, OnlyFans, Pornhub, and others) without requiring you to re-report each instance.


What to Do If a Platform Misses the 48-Hour Deadline

The platform-compliance side of the law takes effect on May 19, 2026. Before that date, platforms are still building their systems and often will not respond within 48 hours. After that date, missed deadlines become a federal violation by the platform itself.

When the deadline passes:

Send a follow-up notice with timestamps. Reference your original submission and explicitly note the 48-hour deadline has passed. Keep the second notice brief and factual. Copy any platform legal contact you can identify.

File a complaint with the FTC. Use reportfraud.ftc.gov. The FTC enforces the platform-compliance side of the law and is the right agency to escalate non-compliance.

Contact an attorney for civil action. The TAKE IT DOWN Act creates a private right of action. You can sue the platform for failing to comply, sue the original poster for publication, or both. Plaintiffs' attorneys taking these cases on contingency are emerging quickly. Organizations like the Cyber Civil Rights Initiative maintain referral networks for free or low-cost legal representation.

Report to law enforcement. The criminal side of the statute is enforced by federal prosecutors. The FBI's Internet Crime Complaint Center at ic3.gov is the right channel. State attorneys general also enforce parallel state NCII laws.

If the deepfake involves a minor, contact NCMEC's CyberTipline at report.cybertip.org immediately in addition to the steps above. CSAM cases trigger a separate enforcement priority.


State Laws Still Matter

The TAKE IT DOWN Act is federal. Most states also have NCII statutes that apply to AI-generated content, and several have stronger civil remedies than the federal law.

California, New York, and Texas have particularly strong civil causes of action allowing significant statutory damages without proof of actual financial loss. Illinois and Virginia include explicit AI-deepfake provisions in their state codes.

If you live in one of these states, your state law often provides faster civil recovery than a federal complaint. An attorney experienced in NCII litigation will know which path moves fastest in your jurisdiction.


Why Community Records Still Matter

Platform takedown is the immediate response. It does not address the operator behind the content.

A scammer or harasser who creates AI-generated NCII often runs the same operation against multiple targets, on multiple platforms, under multiple usernames. Platform-level takedowns remove individual videos. They rarely remove the operator. The operator deletes related content and reappears under a new handle, and the takedown clock resets to zero.

A community-built record of flagged accounts persists across these resets. When a Ledger user flags an operator running NCII generation operations on TikTok, the flag stays attached to the operator pattern even after the platform removes the original account. The next victim who searches for the same operator sees the cumulative flag history.

This is the gap between platform compliance and durable harm reduction. The TAKE IT DOWN Act closes the platform gap. The community layer closes the operator gap.


What This Means for You

If a deepfake of you exists online today, the law on your side is stronger than it was twelve months ago, and is about to get stronger again on May 19, 2026.

Three things to do this week:

  1. Submit a hash to StopNCII.org preemptively if you have any reason to expect targeting. The hash bank does not require a specific incident; it can be set up before any deepfake exists.
  2. Save the four major platform takedown URLs above somewhere accessible. The 24 hours after a deepfake surfaces are the critical window for action.
  3. Identify a friend or family member who would document evidence on your behalf if you are not in a state to do it yourself. Documenting takes ten minutes and saves months later.

The law is a tool. The tool only works if you know how to use it before you need it.



Ledger App

Train your eye. Verify what you find.

Swipe real and AI-generated video clips to sharpen your detection instinct. Then paste any suspicious URL and see what the community has already flagged.

Train Your Eye