Quick answer: AI-powered romance scams cost Americans over $1.1 billion in 2025, with operators running real-time deepfake video calls and LLM-driven conversations against dozens of victims at once. You can spot most of them with a gesture test on video chat: ask them to hold up three fingers, wave with their left hand, or say today's date. Consumer-grade deepfake masks fail these challenges in real time.
A San Francisco woman matches with someone on Hinge in February 2026. He is charming, attentive, asks the right questions, video-chats once for thirty seconds before the connection cuts out. Six weeks later, after every text message, voice note, and conversation has built what feels like the strongest relationship of her life, she has wired $74,000 to a crypto investment platform he introduced her to. The platform is fake. The man is fake. His face was a real-time deepfake mask running on a laptop in a fraud compound thousands of miles away.
Versions of this story now happen at industrial scale. The FBI San Francisco Division reported that romance scam losses across 14 Northern California counties more than doubled in 2025, jumping from $21.5 million to $43.3 million. San Francisco alone surged nearly 900 percent, from $734,479 to $6.34 million. Nationally, the FTC put 2025 US romance scam losses at $1.16 billion.
The technology that made this possible is the same technology covered in the pillar guide on what a deepfake actually is. What changed is the economics. A single operator running large language models for conversation and real-time face-swap models for video can now run 50+ simultaneous victim relationships. Industrial-scale dating fraud is a 2026 reality, not a hypothetical.
This post walks through how the scam works, the eight tells that give it away, the one gesture test that breaks any real-time deepfake on a video call, and what to do if money has already moved.
$6.34 million: romance scam losses in San Francisco alone in 2025, up nearly 900 percent from $734,479 the prior year. The 14-county Northern California total more than doubled to $43.3 million. (Source: FBI San Francisco Division, 2026)
What Is Actually Happening
The industrial version of romance fraud has three layers.
Layer 1: AI-driven conversation. A single operator runs large language models that handle the day-to-day texting, voice notes, and emotional rapport across dozens of victim relationships in parallel. The model remembers each victim's stories, their kids' names, their vacation plans, their grief. It mirrors language patterns. It writes the right "good morning, beautiful" message at the right hour. It is patient, attentive, and never makes the operator-side mistakes that used to give earlier romance scams away.
Layer 2: Real-time deepfake video. When the victim insists on a video call, the operator switches on a face-swap model that runs on consumer hardware. The model maps a synthetic face onto the operator's live webcam feed. Voice cloning runs on top of it. The result is a 30-second to several-minute video chat where the "person" the victim has been talking to appears on screen, makes eye contact, and says hello. Brief calls dodge most failure modes; longer calls require careful head-position discipline from the operator.
Layer 3: The pig butchering pivot. Once emotional trust is established, usually after weeks or months, the conversation pivots to a "great investment opportunity" the operator is supposedly making money on. The victim is walked through opening an account on a fake crypto trading platform. Initial small deposits show fake gains. Larger deposits follow. When the victim tries to withdraw, the platform demands "tax payments" or "compliance fees" that never end. Eventually the victim runs out of money. The operator disappears.
The targeting is not random. AI agents scrape social media in real time to identify lonely, recently widowed, recently divorced, or otherwise emotionally vulnerable users. The SF Standard reporting from Valentine's Day 2026 describes the targeting infrastructure in detail. The pattern is closer to programmatic advertising than to traditional fraud.
For the family-scam variant of this technology (a stranger using a cloned voice to impersonate a relative on the phone), see the voice cloning family scam guide. The mechanics are similar; the target is different.
Eight Tells of a Deepfake Romance Scam
These show up consistently across documented cases reported by the FBI, FTC, AARP, and McAfee through 2025 and 2026.
1. They avoid video calls or only briefly appear on them. Real people happily FaceTime, especially as a relationship gets serious. Deepfake operators dodge video, blame "bad connection," or do brief 30-second cameos and then say they have to go. Repeated avoidance of video chat is the single strongest signal.
2. When they are on video, head movement is minimal. Real-time face-swap models break when the face turns sharply away from the camera. Operators face the camera flat, make minimal head movement, and avoid showing their profile. If your video call partner moves like a mannequin, that is the tell.
3. The relationship pace accelerates faster than any real one. Within days they are calling you "soulmate," "the love of my life," talking about marriage and the future. Real relationships escalate fast under intense interest, but rarely in this specific I-am-already-fully-committed pattern. The acceleration is the LLM optimizing for the emotional-bond-to-money-request ratio.
4. Their photos are too curated. They have a small set of professionally lit photos that look like modeling shots. They do not have group photos with friends, awkward shots, or photos with timestamps that line up with their stated location. Real, social-media-active people have varied, messy, often unflattering image archives.
5. Their story has small inconsistencies under repeat questioning. Their job, their kids' ages, their hometown, the name of the friend they mentioned last week. The LLM does not perfectly remember what it said three days ago across 50 simultaneous relationships. The memory failures are subtle, but they accumulate.
6. Communication moves to encrypted apps quickly. Tinder or Bumble or Hinge → WhatsApp → Telegram → Signal. Each step reduces fraud-detection oversight. Dating-app trust-and-safety teams flag suspicious accounts. Telegram does not. The push to move conversations off the dating app within the first few days is a structural signal.
7. The conversation eventually pivots to crypto or investments. They mention a great trading platform they have been making money on. They want to teach you. They send screenshots of their gains. This is the pig butchering signature. If a romance interest you have never met in person introduces you to a crypto trading opportunity, that is the scam.
8. Any request for money, full stop. A plane ticket. A medical emergency. Customs fees. A "small loan" they will pay back when their international wire clears. The reasons rotate. The pattern is identical: someone you have not met in person, asking for money. The FBI's working assumption is that money requests from never-met online partners are fraud by default. Treat them the same way.
For the broader visual-tells framework that applies to any deepfake video, see the 6 visual tells that instantly give away an AI face. The eight romance-specific tells above are the dating-context application of those principles.
The Gesture Test That Breaks Most Deepfakes
Consumer-grade real-time face-swap models in 2026 still fail at three specific things. Any one of them, asked spontaneously on a video call, is enough to break most masks.
Ask them to hold up three fingers next to their face. Real-time face-swap models lock to the face. When a hand crosses in front of the face, the swapped texture distorts visibly. The operator either has to drop the swap, blur the hand, or end the call.
Ask them to wave with their left hand. Hand gesture detail is computationally expensive. Models struggle with close-range fingers. A specific, unscripted gesture forces the model to render something it was not trained on.
Ask them to say today's date and a piece of news from the last 24 hours. Pre-recorded loops cannot do this. Even live operators using deepfake masks have to pause to look up the date and the news, and the pause itself is the tell.
A single one of these is enough. If the person on your video call refuses to do any of them ("the connection is bad," "my hand hurts," "let me just text you instead"), you are almost certainly talking to a deepfake.
This is the test that consumer-grade deepfake operators cannot reliably beat. Use it before any money moves, and combine it with the eight tells above.
What to Do If You've Already Sent Money
Six steps in order. Speed matters more than anything else.
1. Stop sending money immediately. Do not send a single additional dollar to "complete the withdrawal," "pay the tax," or "release the funds." None of those payments are real. Every additional dollar is lost.
2. Save everything. Screenshot every conversation, every photo they sent, every transaction record, every wallet address, every URL of the supposed trading platform. Save it to a personal cloud drive AND an offline backup before doing anything else. Operators sometimes delete the conversation history or block you entirely once they know they have been caught.
3. Reverse-image search the photos they sent. Use Google Images or TinEye. If their profile photo appears under a different name on Instagram, OnlyFans, a stock photo site, or someone else's social account, you have proof of the fake identity. Add it to your evidence file.
4. Contact your bank or credit card company within 24 hours. Wire transfers can sometimes be reversed if reported quickly. Crypto transactions usually cannot be reversed, but exchanges sometimes freeze destination wallets if law enforcement is involved fast enough. Time matters.
5. File reports with three agencies in this order. The FBI Internet Crime Complaint Center at ic3.gov, the FTC at reportfraud.ftc.gov, and your state attorney general's consumer protection unit. The IC3 report is the most important; the FBI runs an active task force on pig butchering specifically and your case data feeds the broader pattern analysis.
6. Tell one trusted person in your life. Not because you owe anyone an explanation, but because shame is the operator's last weapon. AARP and FTC estimates put the share of romance scam victims who never report at more than half. The non-reporting is what lets these operations continue. You did nothing wrong. The operator was running a multi-billion-dollar industrial fraud against you. Telling someone breaks the isolation the operator depended on.
How to Make Yourself Harder to Target
Pre-emptive habits that defeat industrial-scale romance fraud.
Verify identity early. On the first or second video call, run the gesture test casually. Frame it as a fun thing, not an interrogation. Real people are happy to wave or hold up fingers. Operators generally cannot do it.
Reverse-image search profile photos before getting attached. TinEye and Google Images take 30 seconds. If their photo appears anywhere else, that is the answer.
Never send money to someone you have not met in person. This is one rule, with no exceptions. Not for emergencies, not for plane tickets, not for "just this once." The FBI's working assumption that these requests are fraud is the right one to copy. The rare exception is statistically not worth the loss.
Be wary of relationship intensity in the first few weeks. Real relationships sometimes move fast, but the specific "you are my soulmate, I have never felt this way" pattern within days is an LLM optimization, not a genuine human connection. Slow it down. Real people respect that.
Tell a trusted friend or family member about new online relationships. Not for permission, for perspective. Operators rely on victims hiding the relationship from family because family members spot the inconsistencies fast. The simple act of telling someone reduces your risk dramatically.
For the broader pattern of strangers using AI to clone real people's voices and faces, see how AI is cloning your voice and face from YouTube to sell scams. The same operator infrastructure that runs likeness-theft fraud against creators runs romance fraud against dating-app users.
Why Community Records Matter
Platform takedown removes one fake account. Romance scam operators run dozens of personas across multiple dating apps and social platforms simultaneously, and the fraud-farm infrastructure persists across individual bans.
A community-built record of flagged operator patterns (the photos they reuse, the script lines they repeat, the wallet addresses they funnel money to) persists across platform takedowns. The platforms close individual accounts. The community closes the operator pattern.
Closing
The targeting is industrial. The conversation is run by language models that remember every detail. The video calls use deepfake masks. The one tool the operators have not yet beaten is the spontaneous gesture test, and any victim who runs it before sending money is protected.
If you suspect someone you are talking to is a deepfake, the gesture test takes 30 seconds and does not damage a real relationship. If they pass, you are fine. If they fail, you may have just saved yourself the typical $10,000 to $50,000 per-victim loss the FBI tracks.
If money has already moved, every hour matters. Document, report, freeze, tell someone. The shame the operator counted on is the shame that lets the operation continue against the next person.
Related Posts
- What Is a Deepfake? A Plain-English Guide for Social Media Users: the technical foundation that explains how real-time face-swap and voice cloning work
- AI Voice Cloning Scams Hit 1 in 10 Americans. Here Is How to Protect Your Family.: the family-scam parallel where a stranger calls a relative impersonating you
- AI Is Cloning Your Voice and Face From YouTube to Sell Scams. Here Is What to Do.: the commercial-likeness theft parallel where the same operator infrastructure clones creators' faces for fake endorsements
- Deepfake Fraud Is Now Targeting Your Boss, Not Just Celebrities: the corporate parallel where deepfake calls authorize wire transfer fraud

