
A friend texts you a link. Someone made a TikTok deepfake of them. The person in the video is speaking, moving, being watched by thousands of strangers, and it is not them.
The first thing you want to do is help. The second thing you want to do is get angry. The third thing you want to do is share it with a warning so other people stop believing it.
Only one of those impulses is useful right now.
This is the exact step-by-step sequence for helping a friend who has just become the target of a TikTok deepfake. It is written for the first hour, when speed and documentation matter more than anything else, and for the days after, when the removal process drags and your friend needs something faster than TikTok's enforcement queue.
TikTok removed 2.3 million AI-generated videos in Q1 2026, according to its Q1 2026 Transparency Report. The platform's automated systems catch mass-produced content at scale; targeted deepfakes of private individuals need human reports to move.
Do Not Share the Video. Even to Warn People.
The most common mistake when a deepfake targets someone you know is sharing it to alert mutual friends.
Every share extends the video's reach. TikTok's algorithm does not distinguish between a warning share and an amplifying share. Quote-sharing, screen-recording and reposting, stitching with commentary: all of it tells the platform the content is engaging and pushes it to more people.
If your friend is the target, they do not need their face reaching another thousand viewers under a warning banner. They need the content to stop circulating.
The warning is not your job. Documentation, reporting, and community flagging are. In that order.
Step 1: Document Before You Report (5 Minutes)
This is the step most people skip, then regret.
TikTok sometimes removes content before your friend, their lawyer, or any third party has seen the evidence. Platform action eliminates the record. If your friend later needs the video for a lawsuit, an employment context, or a protective order, and it has been taken down, reconstruction is nearly impossible.
Capture the following before you report anything:
- The full video URL (tiktok.com/...)
- The posting account name and display name
- The follower count at the time you viewed it
- A screen recording of the video playing, with the URL visible in the share menu
- A screenshot of the caption and hashtags
- The date and time you captured it
Do not just save the video file. You need the contextual metadata: who posted it, from which account, with what caption. Platform removal erases all of that. Your screen recording is the only record that preserves it.
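If it helps to keep the capture organized, the metadata can be bundled into one timestamped file alongside the screen recording. A minimal sketch, assuming you paste the details in by hand; the field names here are illustrative, not a TikTok or legal standard:

```python
import json
from datetime import datetime, timezone

def make_evidence_record(video_url, account_handle, display_name,
                         follower_count, caption, hashtags):
    """Bundle the contextual metadata into one timestamped JSON record.

    The screen recording and screenshots are stored separately; this
    file just keeps the who/what/when together so nothing has to be
    reconstructed after a takedown.
    """
    record = {
        "video_url": video_url,
        "account_handle": account_handle,
        "display_name": display_name,
        "follower_count_at_capture": follower_count,
        "caption": caption,
        "hashtags": hashtags,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Hypothetical example values:
print(make_evidence_record(
    "https://www.tiktok.com/@example/video/123",
    "@example", "Example Account", 4200,
    "caption text here", ["#ai", "#funny"],
))
```

Saving this next to the screen recording means the date, account, and caption survive even if the platform removes the post an hour later.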
If the deepfake is sexualized, go to StopNCII.org and submit a hash in parallel. The hash-matching system blocks future uploads across participating platforms without requiring your friend to re-report every instance.
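The privacy model behind StopNCII is that only a hash, a fingerprint of the file, leaves the device; the image or video itself is never uploaded. The sketch below illustrates that idea with a plain cryptographic hash. Note this is an illustration only: StopNCII's real system uses perceptual hashing (PDQ), which also matches lightly edited copies, something a cryptographic hash cannot do.

```python
import hashlib

def fingerprint(path):
    """Compute a SHA-256 digest of a media file, read in chunks.

    Only this hex string would be shared; the file never leaves the
    device. Illustrative of the privacy model, not StopNCII's actual
    perceptual-hash algorithm.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Because participating platforms compare hashes rather than files, a match can block a re-upload without any human re-reviewing the content.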
Step 2: Check Whether the Account Has Been Flagged Before (2 Minutes)
Most deepfake operations are not one-offs. Accounts that post deepfakes tend to post multiple deepfakes, often of different people, using the same generation workflow. Checking the account before you report tells you two things: whether other users have already encountered it, and whether this is a pattern or an isolated incident.
Paste the account URL or the video URL into Ledger. If the account has already been flagged by the community, you will see the verdict and the pattern. If not, your report starts the record.
Why this matters for your friend specifically: TikTok often removes individual videos but leaves the posting account active. The creator then deletes the video, switches usernames, and continues posting. Platform removal is a reset, not an ending. The Ledger community record persists across those resets. An account you flag tonight stays flagged in the record a month from now, under the new username, if the same operator produces new content.
For a friend being targeted by a creator who works at scale, this is the only mechanism that keeps working after the individual video comes down.
Step 3: Report on TikTok (3 Minutes)
The exact report path matters. Small navigation errors mean the report routes to the wrong enforcement queue.
For a deepfake of a real person, follow this path:
- Long-press the video
- Tap Report
- Select Fake or misleading content
- Select Synthetic or manipulated media
- In the free-text field, state directly: "This video is an AI-generated deepfake of [friend's name], a real person. They did not consent to this video."
If the deepfake is sexualized, do not stop at "Synthetic or manipulated media." Submit a second report at Report → Nudity or sexual activity → Non-consensual intimate imagery. This routes to a different enforcement queue that typically acts faster.
If the deepfake is defamatory (makes false statements that could harm your friend's reputation), file a third report under Harassment and bullying → Personal attack. Multiple categories engage multiple enforcement teams.
TikTok's published response time is 24 to 72 hours. In practice it varies widely. A report that routes to NCII enforcement, or a video with high report volume from distinct accounts, moves faster.
Step 4: Have Your Friend Report It Themselves
Your report is useful. Your friend's report carries significantly more weight.
TikTok's enforcement team prioritizes reports from the person being depicted. If your friend has a TikTok account, they should file their own report using the same paths above. If they do not have an account, they can use TikTok's web-based reporting form at tiktok.com/legal/report without needing to register.
Have your friend include:
- Their full legal name
- Confirmation that they are the person depicted in the video
- A statement that they did not consent to the video
- A link to an ID or a verified social profile, if they are comfortable sharing one
This is the highest-priority report TikTok will receive on this video. Do not skip it even if your own report has already been filed.
Step 5: Coordinate a Small Group of Private Reports (30 Minutes)
Single reports move slowly. High-volume reports on a single video move faster.
Ask trusted friends through direct message, never a public post, to report the video from their own accounts using the same category path. Five reports from five accounts move through the queue faster than five reports from one account.
Do not post a public call for reports. Public calls backfire: they alert the creator, who deletes and re-uploads from a new account before enforcement acts. TikTok's system matches at the video URL level. A re-upload resets the counter.
A small coordinated private push works. A public campaign does not.
Before reporting, your friends can run through the visual tells, covered in how to tell if a TikTok video is AI-generated, so their free-text reports describe specific AI signatures rather than general complaints.
Step 6: If the Video Is Spreading Fast, Escalate Legally
If the video is gaining significant traction, or if it is sexualized, defamatory, or tied to a broader harassment campaign, legal escalation is worth considering. The bar is lower than most people assume.
For sexualized deepfakes of real people, most US states have non-consensual intimate imagery statutes that explicitly cover AI-generated content. A cease-and-desist from an attorney carries more weight than a platform report, and an attorney can subpoena the account holder's identity from TikTok if action is warranted. The Cyber Civil Rights Initiative offers free legal resources and referrals for victims.
For defamatory deepfakes, existing defamation law applies. A statement is defamatory if it is false and harms reputation, and a fabricated video of your friend saying or doing something they never said or did is, legally, a false statement of fact.
For deepfakes used in harassment or fraud, federal cyberstalking and wire fraud statutes apply. The FBI's Internet Crime Complaint Center at ic3.gov accepts reports.
For a full breakdown of what deepfake law actually covers in 2026 and where it falls short, see Is It Illegal to Make a Deepfake? What the Law Actually Says in 2026.
What Not to Do, Even Though You Want To
Several actions feel productive but make the situation worse.
Do not reply to the account posting the deepfake. Engagement signals to TikTok's algorithm that the content is interesting. Even angry replies push it to more viewers.
Do not tell your friend to "just ignore it." The content exists. It is being seen. The psychological weight of knowing strangers are watching a fake version of you do things you did not do is real. Dismissing it is harmful advice, not helpful advice.
Do not post a thread exposing the creator. Public exposure usually drives the creator underground, where they delete the flagged content and re-post from a different account. You lose the evidence and the enforcement momentum in the same move.
Do not wait to see if it goes away. Deepfakes of private individuals rarely go away on their own. Fast action in the first 24 hours is decisively more effective than any action taken a week later.
Why TikTok's Enforcement Alone Is Not Enough
TikTok removes videos. It does not always remove accounts. And removed content is often re-uploaded from a fresh username within hours.
When the creator resets, the enforcement clock resets. The flag on the taken-down video stays in TikTok's internal data, but for the next person who encounters the same creator under a new handle, there is no visible signal that anything was wrong. They have to rediscover the pattern from scratch.
A community verification record fills exactly that gap. When Ledger users flag an account, the record persists through platform takedowns and handle changes. Pattern-matching at the community level tends to be faster than platform detection at the file level, especially for targeted human-person deepfakes that do not trigger the automated systems built for mass-produced AI content.
For a friend who is being targeted, the combination of TikTok reporting plus community flagging does more than either alone. The platform addresses the immediate harm. The community record raises the cost of running the operation that produced it.
Support Your Friend Through the Process
Technical action matters. The person matters more.
Being deepfaked is psychologically real harm, not a technical inconvenience. Targeted victims report intrusive thoughts, anxiety, and reluctance to engage online for months afterward. Practical things you can offer:
- Handling the reporting steps so your friend does not have to see the content repeatedly
- A clear record of what you did and when, so they are not also carrying the documentation burden
- A break from their own social media for a few days, with someone else monitoring for re-uploads
- Recommending professional mental health support if the harassment persists or escalates
RAINN's hotline covers sexualized deepfake cases. The Cyber Civil Rights Initiative Safety Center covers broader online harassment. Both are free.
The deepfake is content. Your friend is a person. The action sequence above handles the content. The support above handles the person.
What Ledger Adds to the Process
The report-and-wait model assumes the platform will act. Sometimes it does. Sometimes it does not. In either case, the next deepfake from the same source starts from zero.
Ledger maintains a community record that survives platform takedowns and account resets. When someone pastes an account URL, they see what other users have already found and why. Your report, combined with reports from everyone else who encountered the same creator, becomes evidence that flags the next video before it spreads.
For a friend being targeted, that is the difference between fixing one video and raising the cost of running the operation that produced it.
If you found this post because a friend is in a TikTok deepfake, work the six steps above. Then paste the account URL into Ledger. The record you start now becomes an early warning for the next person who searches for the same account, and that next person may well be your friend.
Related Posts
- How to Tell If a TikTok Video Is AI-Generated: 7 Signs to Check Right Now: the visual detection guide for spotting the tells in the video itself, useful for the free-text field of your reports
- How to Report a Deepfake on TikTok, Instagram, or Facebook: the broader reporting sequence when you are not personally connected to the target
- Is It Illegal to Make a Deepfake? What the Law Actually Says in 2026: the legal options and limits when platform reporting is not enough
- The 6 Visual Tells That Instantly Give Away an AI Face on Video: the specific visual signatures worth noting in your Step 1 documentation

