
Quick answer: AI deepfake job interviews now run on both sides of the hiring process. Employers face fake candidates using face-swap and stolen identities to take remote roles; job seekers face fake recruiters using deepfaked Zoom calls to extract fees or personal data. The detection tells are different for each side. Here are both.
In July 2024, security training company KnowBe4 publicly disclosed that it had unknowingly hired a North Korean IT worker. The candidate passed video interviews. He passed background checks because the identity he used, while stolen, belonged to a real US person. He was onboarded, sent a company laptop, and started work. Within an hour of opening the laptop, he attempted to load malware.
KnowBe4 caught it. Many companies do not. The Department of Justice indicted 14 individuals in May 2025 for participating in a multi-year scheme that placed North Korean operatives into remote IT roles at US companies, generating an estimated $88 million in salary that funded the regime's weapons program.
That is one half of the deepfake job interview problem. The other half runs in the opposite direction: fake recruiters and fake hiring managers using AI-generated personas to extract upfront fees, banking information, or work-authorization documents from real job seekers. The FBI's IC3 report logged employment fraud as one of the fastest-growing complaint categories of 2024.
If you are an employer, you need to know how to spot a deepfake candidate. If you are a job seeker, you need to know how to spot a deepfake recruiter. The tells are different. Both are below.
One in four candidate profiles globally is projected to be fake or synthetically generated by 2028, according to Gartner's 2024 hiring-fraud forecast. Source: Gartner, "Predicts 2024: Talent Acquisition," February 2024.
The Two Scams Running in Parallel
Both scams use the same underlying technology that powers every other deepfake in our explainer on what a deepfake actually is: face-swap models that map a synthetic or stolen face onto a live video feed. The cost of running these tools fell by a factor of roughly 100 between 2022 and 2025. What was a research demo is now a $20 download.
The two scams diverge from there.
Scam A: deepfake candidate. An operator (sometimes an individual, often a state-affiliated team) applies for remote-eligible roles at US, EU, and UK companies under a stolen identity. Background checks pass because the identity is real. The interview passes because the face on camera is a live face-swap over the operator's video feed. Onboarding succeeds. Once inside, the operator either routes salary back to the home country or begins lateral movement to access company systems and data.
Scam B: deepfake recruiter. A fraudster impersonates a recruiter at a real, well-known company. The candidate gets a LinkedIn message, an email, or a Telegram outreach. A "video interview" with a deepfaked hiring manager follows. The "offer" is real-looking. Then the fraud surfaces: an upfront equipment fee, a "training cost," a request for the candidate's bank account "for direct deposit setup," or a request to scan and send their passport and SSN.
Both are rising. The detection signals are different.
For Employers: Seven Tells of a Deepfake Candidate
These are the patterns that security teams who have caught real cases keep finding.
1. Webcam refuses to switch or show a second angle. Real candidates can hold a phone up alongside the laptop and show two angles. Deepfake operators cannot, because real-time face-swap models lock to a single camera feed. Asking a candidate to hold up a piece of paper with the date written on it, in front of their face, is the single most reliable late-stage check: occluding the face breaks most live swap models.
2. Audio-video desync, especially on plosive consonants. In a live face-swap, the rendered video typically trails the audio by 80 to 200 milliseconds. The mismatch is most visible on hard consonants such as "p," "b," and "k." Watch the lips on these sounds specifically.
3. Lighting that does not match face movement. When a real person turns their head, the highlights on their face shift smoothly. A face-swap rendered on top of a real person's head produces lighting on the swapped face that drags by a frame or two, especially under directional light. The same six visual tells we cover for spotting an AI face on video apply here, just compressed into a 30-minute interview window.
4. Identity document inconsistencies. The candidate cannot produce their ID on camera, or the ID has been laminated or photographed in a way that obscures key fields. Always ask for live ID verification on camera. Always cross-check the name on the ID against the name on the resume, the LinkedIn, and the email domain.
5. Behavioral interview answers feel rehearsed; improvised follow-ups break the response. Deepfake candidates frequently work from a script. Ask a question that requires improvisation: "Walk me through what was on your desk yesterday." If the answer feels canned, it is.
6. Compensation flexibility that contradicts the stated location. A candidate who lists a US address but is willing to accept a salary that is 40 percent below market for that city is suspicious. A candidate who pushes for routing payments through a third-party payroll service or to an account that does not match their stated identity is more suspicious.
7. Refusal of an in-person meeting at any point. Many deepfake candidate cases ended the moment the company asked the candidate to fly out for an in-person final interview. Building one mandatory in-person checkpoint into your hiring flow, even for fully remote roles, eliminates a large fraction of the fraud risk.
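Tell 2 above can be checked after the fact if interviews are recorded. The sketch below is a minimal illustration, not a production detector: it assumes you have already extracted a per-frame audio-loudness envelope and a per-frame mouth-openness value (e.g., from a face-landmark detector), both hypothetical inputs, and estimates their offset by cross-correlation.

```python
import numpy as np

def estimate_lag_ms(audio_env, mouth_open, fps=30.0):
    """Estimate the offset between an audio-loudness envelope and a
    mouth-openness track, both sampled once per video frame.
    Positive result: audio trails the lips. Negative result: the
    rendered video trails the audio, the pattern a live face-swap
    tends to produce."""
    a = np.asarray(audio_env, dtype=float)
    m = np.asarray(mouth_open, dtype=float)
    a = a - a.mean()          # remove DC so correlation peaks cleanly
    m = m - m.mean()
    corr = np.correlate(a, m, mode="full")
    lag_frames = int(corr.argmax()) - (len(m) - 1)
    return 1000.0 * lag_frames / fps
```

A consistently negative offset in the 80-to-200-millisecond range discussed above is one more data point, not proof: video-conferencing compression and network jitter introduce their own lag, so treat the number as corroborating evidence alongside the other tells.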
For Job Seekers: Six Tells of a Deepfake Recruiter
The other side of the scam. These are the signals that predict a fraudulent recruiter or hiring manager.
1. Communication moves off corporate channels fast. Real recruiters at real companies use their company email. Fraudulent recruiters move conversations to Telegram, WhatsApp, or Signal within the first message or two. The reasoning is always similar: "easier to coordinate," or "the team uses this for hiring." It is not.
2. Email domain is almost-but-not-quite right. A real Microsoft recruiter emails from @microsoft.com. A fraudulent one emails from @microsoft-careers.co or @msft-jobs.com. Always check the email domain character by character against the company's primary public domain, not against your memory of what feels right.
3. Upfront fees of any kind. Real employers pay you. They do not charge you for equipment, for training, for "background check processing," or for software. Any request for money before you are formally on payroll is a fraud signal at 100 percent confidence.
4. LinkedIn profile that is too thin. The "recruiter" reaching out has a profile created in the last six months. They have under 100 connections. Their previous experience is at companies you cannot independently verify. Their photo reverse-image-searches to nothing or to multiple unrelated profiles.
5. Pressure to skip standard verification. Real hiring processes have onboarding paperwork, formal offer letters on company letterhead, and payroll setup that goes through HR systems. Fraudulent ones ask you to "send your SSN and bank info quickly so we can get you set up before Monday." Real companies do not run on that timeline.
6. The "video interview" feels off. The same visual tells that catch deepfake candidates work in reverse for spotting deepfake hiring managers. Watch for audio-video lag, lighting that does not track head movement, and refusal to share screens or switch cameras. If the interviewer claims technical difficulties whenever you ask them to do something the camera should be able to do, that is the tell.
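The character-by-character domain check in tell 2 is easy to script. Here is a minimal sketch using only Python's standard library; the allowlist contents and the 0.6 similarity threshold are illustrative assumptions, not tuned values.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist: the employer's real sending domains,
# taken from its public website, not from the suspicious email.
OFFICIAL_DOMAINS = {"microsoft.com"}

def classify_sender(email, official=OFFICIAL_DOMAINS, threshold=0.6):
    """Return 'official' for an exact domain match, 'lookalike' for a
    near-miss spoof such as microsoft-careers.co, else 'unknown'."""
    domain = email.rsplit("@", 1)[-1].lower().strip()
    if domain in official:
        return "official"
    # Similarity only surfaces how close a spoof is; the exact match
    # above is the check that actually matters.
    best = max(SequenceMatcher(None, domain, d).ratio() for d in official)
    return "lookalike" if best >= threshold else "unknown"
```

The exact match against the real domain is the decision that counts; the similarity score exists only to flag how convincing a spoof is, and "unknown" still warrants manual verification through the company's own careers page.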
What to Do If You Suspect a Fake Candidate
For employers, four steps in order.
1. Document the suspicion before terminating. Save the recorded interview, the resume, the application, and the LinkedIn URL. Capture screenshots of the candidate's webcam feed during the interview if you have them.
2. Do a live ID re-check. Schedule a follow-up "logistics call" and ask the candidate to hold up their ID next to their face on camera. Many fraud cases end here because the operator cannot produce the document live.
3. Run a sanctions/OFAC screen against the resume identity. This is standard for any hire but worth re-running when a deepfake candidate is suspected. The DOJ indictments from May 2025 produced a public list of identity markers tied to the North Korean IT worker scheme.
4. Report to the FBI's IC3 if confirmed. IC3.gov accepts reports of fraudulent employment activity. The FBI maintains an active task force on the IT worker scheme specifically. Your report adds to the pattern data.
What to Do If You Suspect a Fake Recruiter
For job seekers, three steps.
1. Stop responding immediately. Do not engage further. Do not "see how the next message lands." Every additional message gives the fraudster another datapoint about you.
2. Verify through the company's actual website. Go to the company's careers page directly, find the corporate recruiting team's contact, and forward the suspicious outreach with its original headers. Real companies have abuse and fraud reporting channels.
3. Report to the FTC and IC3. The FTC's reporting site tracks employment scams as a category; IC3 covers the same ground. If money or personal data has already been sent, also notify your bank within 24 hours and freeze your credit at all three bureaus.
The same operator playbook is running across multiple high-trust contexts: the broader pattern of business-side deepfake fraud includes CEO impersonation, executive voice cloning, and other business deepfake scams.
Closing
The deepfake job interview problem is not a hypothetical 2030 issue. It is a current 2026 reality that has cost US companies measurable money and exposed sensitive systems at firms with otherwise strong security postures. The KnowBe4 case is well known because the company disclosed it. Many cases at firms that chose not to disclose remain unknown.
The good news is that the detection signals are knowable and the procedural defenses are simple. One mandatory in-person checkpoint defeats most of Scam A. One rule against any upfront fees defeats most of Scam B. The hiring process has not yet adapted to AI-generated faces. It needs to.
Related Posts
- What Is a Deepfake? A Plain-English Guide for Social Media Users: the technical foundation that explains why face-swap interviews are now possible at this cost
- How to Tell If a TikTok Video Is AI-Generated: the same detection-skill framework, applied to social video instead of webcam interviews
- Deepfake Fraud Is Now Targeting Your Boss, Not Just Celebrities: the executive-impersonation parallel, the same operator playbook with a different attack surface
- The 6 Visual Tells That Instantly Give Away an AI Face on Video: the visual-tell checklist that applies cleanly to a Zoom interview window

