Video verification
Analyze short-form and uploaded video for synthetic signals.
Run the unified truth engine against video uploads or URLs for deepfake, edit, and context-risk indicators.
Last reviewed 2026-05-10 · See methodology
About this check
What the AI Video Detector actually does.
The AI Video Detector treats a clip as a sequence of dependent frames plus an audio track. Generated and deepfake video tends to break in small ways across that sequence — the eye doesn't always notice, but signal analysis can. Upload a clip or paste a link to get a Truth Score with a per-signal breakdown for video.
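The per-signal breakdown can be thought of as a weighted combination of individual signal scores. As a minimal sketch, assuming a weighted-average model with illustrative signal names and weights (these are not checkreal.ai's actual signals or weights):

```python
# Hypothetical sketch: combine per-signal video scores into one Truth Score.
# Signal names and weights below are illustrative assumptions.
SIGNAL_WEIGHTS = {
    "temporal_consistency": 0.40,
    "face_coherence": 0.35,
    "audio_visual_alignment": 0.25,
}

def truth_score(signals: dict) -> float:
    """Weighted average of per-signal scores, each in [0, 1]."""
    total = sum(SIGNAL_WEIGHTS[name] * signals[name] for name in SIGNAL_WEIGHTS)
    return round(total, 3)

breakdown = {
    "temporal_consistency": 0.9,
    "face_coherence": 0.8,
    "audio_visual_alignment": 0.7,
}
score = truth_score(breakdown)  # 0.4*0.9 + 0.35*0.8 + 0.25*0.7 = 0.815
```

The breakdown lets a reviewer see *which* signal dragged a score down, rather than trusting a single opaque number.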
How it works
1. Temporal consistency checks compare adjacent frames for impossible motion or warping.
2. Face-region analysis looks for blending seams, eye-gaze drift, and inconsistent lighting on skin.
3. Audio-visual alignment scores whether mouth movement and speech match.
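The first step above, temporal consistency, can be sketched as comparing adjacent frames and flagging transitions whose pixel change far exceeds the clip's typical motion. This is a toy illustration, not the engine's actual algorithm; pure-Python lists of grayscale rows stand in for decoded frames:

```python
# Toy temporal-consistency check: flag frame pairs whose pixel change
# far exceeds the clip's typical (median) frame-to-frame delta.

def frame_delta(a, b):
    """Mean absolute pixel difference between two same-sized grayscale frames."""
    diffs = [abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return sum(diffs) / len(diffs)

def temporal_outliers(frames, factor=3.0):
    """Indices of transitions whose delta exceeds factor x the median delta."""
    deltas = [frame_delta(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    median = sorted(deltas)[len(deltas) // 2]
    return [i for i, d in enumerate(deltas) if median > 0 and d > factor * median]

# Smooth motion with one warp-like jump between frames 2 and 3:
frames = [
    [[10, 10], [10, 10]],
    [[12, 12], [12, 12]],
    [[14, 14], [14, 14]],
    [[90, 90], [90, 90]],  # sudden jump: delta 76 vs typical delta 2
    [[92, 92], [92, 92]],
]
print(temporal_outliers(frames))  # [2]
```

A real pipeline would work on optical flow and learned features rather than raw pixel deltas, but the principle is the same: generated video tends to break continuity in ways the median motion of the clip does not explain.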
When to use it
Real situations this page is built for.
- A viral clip looks staged or the speaker's face moves unnaturally.
- A short video is being used as evidence in a dispute or news cycle.
- An interview circulating on social media may be a face-swap or voice-clone composite.
Limitations
Heavily compressed reposts (e.g. a TikTok rip of an Instagram rip of a Twitter clip) lose most of the analyzable signal. Very short clips, under 3 seconds, give the engine little to work with.
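These limitations suggest a pre-flight guard before scoring. The sketch below is an assumption about how such a guard could look; only the 3-second floor comes from this page, and the resolution threshold is a hypothetical stand-in for "too recompressed to analyze":

```python
# Hypothetical pre-flight guard reflecting the limitations above.
# The 3-second minimum comes from this page; the 240 px resolution
# floor is an illustrative assumption, not a documented threshold.

def clip_is_analyzable(duration_s: float, width: int, height: int):
    """Return (ok, reason) before spending analysis time on a clip."""
    if duration_s < 3.0:
        return False, "too short: under 3 seconds gives the engine little signal"
    if min(width, height) < 240:
        return False, "too low-resolution: recompression has likely destroyed signal"
    return True, "ok"
```

Rejecting hopeless inputs up front is cheaper than returning a low-confidence score that a reader might misinterpret as "fake".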
Detection guides
Related reading from the checkreal.ai blog.
Practical guides that go deeper into the signals this detector looks for.
Honest scope
What this detector does not do.
Naming the gaps explicitly so the score is interpreted in context.
- Whether claims made in the clip are factually true.
- Which deepfake tool produced a manipulated clip.
- Long-form video — the demo focuses on short-form sampling.
FAQ
AI Video Detector questions
Does it work on long videos?
The demo focuses on short-form video. Longer clips are sampled rather than processed end-to-end.
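Sampling a long clip rather than processing it end-to-end can be sketched as picking evenly spaced timestamps up to a fixed frame budget. The budget and spacing here are illustrative assumptions, not the demo's actual parameters:

```python
# Illustrative sketch of sampling a long clip instead of processing it
# end-to-end. The 32-frame budget is an assumption, not a documented value.

def sample_timestamps(duration_s: float, budget: int = 32):
    """Evenly spaced sample times (seconds), centered in each interval."""
    step = duration_s / budget
    return [round(step * i + step / 2, 2) for i in range(budget)]

# A 10-minute clip collapses to 32 sampled instants:
times = sample_timestamps(600.0)
```

The trade-off is the obvious one: a manipulation confined to a few seconds of a long video can fall between samples, which is why the demo is scoped to short-form content.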
Can it detect a face-swap I can't see?
Sometimes. Modern face-swap pipelines leave subtle temporal artifacts that the engine surfaces as motion or face-coherence signals, but no detector is perfect.
Why did a clearly real video score low?
Heavy compression, stylized filters, and aggressive editing all look like manipulation to a detector. Treat low scores as 'review further', not 'fake'.