In February 2024, a finance employee at a multinational firm in Hong Kong joined a video call with what appeared to be the company's CFO and several other senior colleagues. He was instructed to authorize the transfer of US$25 million. He did. Every other participant on the call was a deepfake. The case, reported by Hong Kong police and later confirmed by the engineering firm Arup, became the first widely documented instance of a real-time deepfake video call being used to carry out a major financial fraud.
That was two years ago. The technology has not stood still.
This Is Not a Phishing Email Problem
Corporate fraud has existed for decades. What changed in 2024 and accelerated through 2026 is the addition of convincing video and voice to what was previously a text-based attack vector.
The older version of this attack is called Business Email Compromise (BEC). A criminal impersonates an executive over email and instructs a subordinate to transfer funds. BEC has caused over $50 billion in global losses since 2013, according to FBI IC3 reports.
The newer version adds a layer that email could never provide: visual confirmation. An employee who has been trained to spot suspicious emails is not necessarily prepared to distrust the face of their CFO speaking to them directly on a video call.
The attack depends on two facts:
- Most senior executives have substantial publicly available video and audio: conference talks, earnings calls, press interviews, LinkedIn posts, media appearances
- That material is sufficient to train a convincing real-time face and voice clone
How the Attack Works
The mechanics vary by case, but the general pattern is consistent across documented incidents.
Step 1 — Target selection. The attacker identifies a company with a finance authorization process that involves a small number of decision-makers. Mid-size companies with international operations are frequent targets because wire transfers to overseas accounts are normal business activity and less likely to trigger suspicion.
Step 2 — Source material collection. The attacker gathers public video and audio of the executive being impersonated. For a CFO or CEO of any company with media exposure, this is not difficult. A 30-second clip from an earnings call is often enough.
Step 3 — Tool preparation. Commercial deepfake tools available in 2026 can generate a real-time face and voice overlay. The attacker does not need to build this from scratch. Several platforms offer it as a subscription service.
Step 4 — Pretext call. The attacker contacts the target employee by email or messaging platform, warning of a confidential and urgent financial matter. The framing is designed to create urgency and suppress the employee's instinct to verify through normal channels.
Step 5 — The video call. The employee joins a call, sees a convincing version of their executive, and receives direct authorization to transfer funds. Other fake participants may appear to give the call the feel of a normal meeting.
Step 6 — Transfer. Funds move to an account controlled by the attacker, typically through a chain of jurisdictions that complicates recovery.
Who Is Being Targeted in 2026
Based on FBI IC3 data and documented cases across the U.S., U.K., and Asia-Pacific, the primary targets in 2026 are:
Finance employees below the CFO level. Controllers, accounts payable managers, and treasury analysts who have authorization to execute transfers but may not have direct daily contact with the executives being impersonated.
Companies undergoing M&A activity or restructuring. These environments create legitimate reasons for unusual or large transactions, and employees may be less likely to question an urgent transfer request from leadership during a stressful period.
Professional services firms with international client billing. Law firms, consulting firms, and accounting firms regularly wire large sums internationally. A fraudulent instruction blends in.
Mid-size companies without dedicated fraud response teams. Enterprise companies have more verification layers. A company with 100 to 2,000 employees may have a single person responsible for authorizing large transfers, with less oversight.
What Verification Now Has to Look Like
The standard advice for BEC, "call the executive back on a known number to verify," is no longer sufficient if the attacker can also clone the voice in a follow-up call.
A credible verification protocol in 2026 has to include elements that cannot be faked in real time:
Pre-agreed challenge phrases. Executive and finance teams establish a word or phrase in advance that must be present in any request above a threshold amount. The phrase is never transmitted digitally and is changed on a regular schedule.
Out-of-band text confirmation. Before executing any large transfer, the authorizing employee sends a text message to the executive's personal mobile number (a number they already have, not one pulled from the corporate directory or supplied during the request) and waits for a reply. This does not prove the original call was real, but it breaks the attack chain.
Transfer delay windows. A mandatory 24-hour delay on any transfer above a set threshold, regardless of claimed urgency. Urgency is the primary psychological lever in this attack. Removing the ability to act immediately removes the most effective pressure.
Video call authentication protocols. Some companies are implementing in-call verification steps, such as having executives raise both hands or perform a specific physical action that is harder to replicate in real time. This is a stopgap while better solutions develop.
Facial landmark verification tools. Several enterprise security platforms now offer real-time deepfake detection for video calls. These tools are imperfect but add a detection layer for the lower-quality fakes that are still the majority of attacks.
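To make the combination concrete, here is a minimal sketch of how the threshold, delay window, challenge phrase, and out-of-band confirmation could be enforced together. It is an illustration under assumed names and values (TransferRequest, release_decision, a $50,000 threshold, a 24-hour delay), not a reference implementation; a real payment workflow would also log every hold and route blocked requests to a second approver.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative policy values; real thresholds belong in your treasury policy.
REVIEW_THRESHOLD_USD = 50_000          # transfers at or above this amount get extra checks
MANDATORY_DELAY = timedelta(hours=24)  # cooling-off window regardless of claimed urgency

@dataclass
class TransferRequest:
    amount_usd: float
    requested_at: datetime
    # Confirmation received on a separate channel (e.g. a reply to a text sent
    # to a number the employee already had), recorded by the authorizing employee.
    out_of_band_confirmed: bool = False
    # Whether the request included the pre-agreed challenge phrase. The phrase
    # itself is never stored or transmitted digitally; only the yes/no outcome
    # of the check is recorded here.
    challenge_phrase_verified: bool = False
    holds: list[str] = field(default_factory=list)

def release_decision(req: TransferRequest, now: datetime) -> bool:
    """Return True only if every anti-impersonation control has been satisfied."""
    if req.amount_usd < REVIEW_THRESHOLD_USD:
        return True  # below threshold: normal process applies

    if not req.challenge_phrase_verified:
        req.holds.append("challenge phrase not verified")
    if not req.out_of_band_confirmed:
        req.holds.append("no out-of-band confirmation on a known number")
    if now - req.requested_at < MANDATORY_DELAY:
        req.holds.append("24-hour delay window has not elapsed")

    return not req.holds

# Example: an "urgent" request made an hour ago fails on every control.
req = TransferRequest(amount_usd=250_000,
                      requested_at=datetime.now(timezone.utc) - timedelta(hours=1))
print(release_decision(req, datetime.now(timezone.utc)))  # False
print(req.holds)
```

The delay check is the one control an attacker cannot talk their way around on a call: the clock, not the employee, enforces it.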
The Individual Version of This Attack
CEO deepfake fraud is not limited to corporate finance. The same attack structure runs against individuals.
The FTC has documented a growing category of "family emergency" scams where victims receive a video or voice message appearing to be from a child, grandchild, or other relative in distress, followed by an urgent request to wire money. The source material for these attacks is increasingly pulled from social media, where people post videos of themselves and their family members regularly.
The emotional leverage in the family version is more immediate than the corporate version. A parent who sees what appears to be their child on a video call has less psychological space to apply skepticism.
The defense is the same in both contexts: establish a shared verification phrase in advance, confirm through a separate channel, and never transfer money based solely on what you saw or heard on a video call.
What Ledger Is Building for This
Corporate deepfake fraud operates at a scale and in channels that differ from the consumer social media context Ledger currently focuses on. But the underlying detection problem, identifying AI-generated faces and voices in video, is the same.
The community detection model, where multiple people assess the same content and build a consensus signal, applies to this problem too. Accounts and videos flagged by the Ledger community contribute to the pattern database that detection tools draw on.
If you have encountered a suspicious video in a corporate or professional context, paste the URL into Ledger and see what the community has already found.
What to Do Right Now
If you are an individual: Establish a family verification phrase. Tell your family about voice and video cloning. If you receive an urgent request from a family member over video, hang up and call them back on a number you already have.
If you are responsible for financial controls at a company: Audit your wire transfer authorization process. Identify whether your current verification steps can be bypassed by a convincing video call. Add a non-digital verification step for transfers above your threshold.
If you have already transferred funds: Contact your bank within the first hour. File a report with the FBI Internet Crime Complaint Center at ic3.gov. In the U.S., the FBI's Financial Fraud Kill Chain can sometimes intercept fraudulent transfers if the bank is notified quickly enough.
Related Posts
- What Is a Deepfake? A Plain-English Guide for Social Media Users: the technical background on how these videos are made, including voice and face cloning methods
- That Celebrity Crypto Video Is Probably a Deepfake: AI-generated endorsement scams using the same face-cloning technology against consumers instead of finance teams
- Free vs. Paid Deepfake Detection Tools: What You Actually Need: the enterprise detection stack referenced in this post and where it does and does not fit individual use
- Is It Illegal to Make a Deepfake? What the Law Actually Says in 2026: the wire fraud and financial statutes that apply when a deepfake is used to authorize a transfer

