Everyone assumes automated tools are better at this than people. The numbers on images support that assumption. The numbers on video do not.
A University of Florida study published in February 2026 found that AI detection models reach 97% accuracy on deepfake images. That sounds definitive. Then the researchers tested the same models on video, and humans outperformed them. That result is not a fluke. It is a structural problem with how detection models are built.
Understanding it changes how you should think about spotting AI-generated content in your feed.
AI Is Trained on the Wrong Kind of Data
Most deepfake detection models were built to classify images. That is where the research began, where the benchmark datasets live, and where the accuracy numbers look impressive.
Video is a different problem.
A video is not a sequence of independent images. It has a time dimension. Motion connects frames. A real human blinks with a specific rhythm. Lips sync to speech in a way that follows muscle mechanics. Skin texture shifts as the face moves through light. These signals are continuous, and they carry information that a single frame does not.
Current AI detectors largely miss this. They analyze frames, often in isolation, and miss the temporal inconsistencies that accumulate across a clip. A deepfake that holds up in any single frame can collapse when you watch it at normal speed and notice the eyes do not quite track, or the jaw movement lags the audio by a fraction of a second.
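To make the distinction concrete, here is a toy sketch in Python of the two approaches. Everything in it is invented for illustration: the per-frame scores, the blink times, and the thresholds are hypothetical, not drawn from any real detector.

```python
# Illustrative only: a frame-level check vs. a temporal check.
# All numbers below are made up for the example.

def frame_level_verdict(frame_scores, threshold=0.5):
    """Average independent per-frame fake-probabilities, roughly
    what an image-trained detector applied frame by frame does."""
    return sum(frame_scores) / len(frame_scores) > threshold

def temporal_verdict(blink_times, min_gap=1.0, max_gap=8.0):
    """Flag a clip when gaps between blinks fall outside a plausible
    human range (hypothetical bounds, in seconds)."""
    gaps = [b - a for a, b in zip(blink_times, blink_times[1:])]
    return any(g < min_gap or g > max_gap for g in gaps)

# A deepfake can look clean frame by frame...
per_frame = [0.1, 0.2, 0.15, 0.1, 0.2]
print(frame_level_verdict(per_frame))   # False: frame model says "real"

# ...while its blink rhythm gives it away over time.
blinks = [0.0, 0.3, 0.6, 12.0]          # two rapid blinks, then a long stare
print(temporal_verdict(blinks))         # True: temporal check says "fake"
```

The point is not the thresholds. It is that the second function looks at a relationship *between* frames, which the first function cannot see no matter how accurate it is on any single frame.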
This is the core finding from the Florida study. AI dominates on static detection. On video, the time dimension breaks the model.
What the Human Eye Actually Catches
The first time you watch a video, you take it at face value. The second time, if something felt off, you look differently.
Trained human observers pick up on motion artifacts that current AI pipelines are not designed to catch. Unnatural blink timing. Micro-jitters around the hairline when the head turns. Skin that looks smooth in one light and plasticky in another. Lip sync that is correct for the words but wrong for the breathing rhythm between sentences.
None of these signals are easy to articulate. But they are perceivable, and they become more perceivable with practice.
That is exactly the mechanism behind Ledger's Train Your Eye mode. It exposes you to labeled examples, builds your pattern recognition, and makes you faster at catching the signals that current automated tools walk past.
For a detailed breakdown of what those visual signals look like in practice, the guide on 6 visual tells that reveal an AI-generated face covers the perceptual heuristics in specific terms.
Key stat: AI detection models reached 97% accuracy on deepfake images in the University of Florida study (February 2026), while humans outperformed those same models on video classification tasks.

The Honest Admission: This Gap Is Narrowing
AI video detection is improving. The 2026 University of Florida finding reflects the current state of the field. It does not reflect where the field will be in 18 months.
Researchers are building models that operate on temporal features directly. Optical flow analysis, blink cadence modeling, and audio-visual sync scoring are all active areas. Some of these approaches already outperform human observers in controlled lab settings with high-quality source video.
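To give a flavor of what audio-visual sync scoring means in practice, here is a toy Python version. The signals, the lag window, and the frame counts are all invented for illustration; real systems work on learned embeddings, not binary beat patterns.

```python
# Toy audio-visual sync check: find the frame offset where audio
# energy and lip motion line up best. All data here is invented.

def best_lag(audio_energy, lip_motion, max_lag=5):
    """Return the frame offset at which the two signals correlate best.
    A nonzero result suggests the mouth trails (or leads) the audio."""
    def corr(a, b):
        return sum(x * y for x, y in zip(a, b))
    scores = {
        lag: corr(audio_energy[lag:], lip_motion)
        for lag in range(max_lag + 1)
    }
    return max(scores, key=scores.get)

audio = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
lips  = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # same rhythm, shifted by 2 frames
print(best_lag(audio, lips))             # 2: lips lag the audio by 2 frames
```

A production model would score much subtler offsets than this, but the underlying idea is the same one your eye uses when lip sync feels slightly off.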
The caveat is that consumer video on TikTok and Instagram is not lab-quality. It is compressed, re-encoded, cropped, and filtered. These transformations degrade the frame-level features that image-based AI detectors rely on most. Human observers are more robust to that degradation.
For now, your trained eye is the sharper instrument on the platforms where deepfake fraud actually runs.
Community Verification Solves a Different Problem
Single-user detection, whether human or automated, has a ceiling. You are one observation. You bring one set of experiences, one attention span, one threshold for what feels wrong.
Community-based verification changes the math.
When multiple people watch the same video independently and flag the same artifacts, the signal compounds. Disagreements surface edge cases. The aggregate judgment is more reliable than any individual verdict, human or machine.
This is the structural advantage Ledger is built on. It does not ask you to be an expert. It asks you to contribute your observation to a pool of observations. The verdict that comes back reflects weighted community consensus, not a single probability score from a model trained on images.
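The aggregation idea can be sketched in a few lines of Python. This is a minimal illustration of weighted consensus in general, not Ledger's actual scoring, which is not described here; the votes and reputation weights are invented.

```python
# Minimal weighted-consensus sketch. Votes and weights are hypothetical.

def weighted_verdict(votes):
    """votes: list of (is_fake, weight) pairs, where weight might
    reflect a reviewer's track record. Returns the majority call by
    total weight, plus the share of weight behind 'fake'."""
    fake_weight = sum(w for is_fake, w in votes if is_fake)
    total_weight = sum(w for _, w in votes)
    return fake_weight > total_weight / 2, fake_weight / total_weight

votes = [(True, 0.9), (True, 0.6), (False, 0.4), (True, 0.8), (False, 0.3)]
verdict, confidence = weighted_verdict(votes)
print(verdict, round(confidence, 2))   # True 0.77
```

Even in this stripped-down form, the structural point holds: five imperfect observers with different weights produce a verdict and a confidence, where a single observer can only produce a gut call.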
For a practical starting point on what to look for before you report, how to tell if a TikTok video is AI-generated walks through the most common signals in short-form video specifically.
And for the definitional grounding, what a deepfake actually is covers the technical and social landscape.
The Right Tool for the Right Signal
The takeaway from the Florida study is not that AI detection is useless. It is that different tools are better at different layers of the problem.
AI excels at scale, speed, and consistency on image-level signals. It does not get tired. It does not bring priors about a creator it already trusts. Humans excel at temporal perception, contextual reasoning, and catching the accumulation of small inconsistencies across a clip.
The right answer is not humans versus AI. It is using each where it actually performs.
Right now, for video in your feed, your trained eye is still the sharper instrument. Use it.
Related Posts
- How to Tell If a TikTok Video Is AI-Generated: 7 Signs to Check Right Now: the platform-specific detection guide built around the signals human observers catch best
- The 6 Visual Tells That Instantly Give Away an AI Face on Video: the face-specific perceptual signals in practical terms
- What Is a Deepfake? A Plain-English Guide for Social Media Users: the technical grounding for how deepfakes are generated and why certain signals persist

