
checkreal.ai for journalists

Newsrooms are spending more of their day evaluating clips that may be AI-generated, deepfaked, miscaptioned, or staged. checkreal.ai is built to slot into the verification step of an existing editorial workflow — not to replace your judgment, but to surface the structured signals you can cite when explaining a decision.

The scenario

A reporter receives a 30-second clip that appears to show a public figure making a controversial statement. The clip is going viral. Publishing without a verification step risks amplifying disinformation. Holding the story risks being scooped by less careful outlets. The decision has to be made in minutes.

Recommended workflow

  1. Score the source first

    Who posted it? Check account age, posting history, and prior provenance issues. Scoring the source is faster than scoring the file and is often dispositive.

  2. Run the deepfake detector for face and lip-sync

    Upload the clip or paste the URL. The deepfake detector reports facial-landmark stability, lip-sync alignment, and scene continuity — the structural signals editors can cite.

  3. Cross-reference with the AI voice detector

    If the audio carries the claim, run it through the voice detector for clone-style synthesis markers. Combined with the visual score, this is a defensible second opinion.

  4. Document the verification trail

    Save the report screenshot. Note the source URL, time accessed, and the signals that drove the decision. The signal breakdown gives readers something to evaluate, not just a verdict.
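The source-scoring step above can be sketched as a simple triage heuristic. This is an illustrative Python sketch, not checkreal.ai's actual scoring logic; the inputs, thresholds, and labels are all assumptions a newsroom would tune to its own standards.

```python
def score_source(account_age_days: int, post_count: int,
                 prior_provenance_flags: int) -> str:
    """Toy triage heuristic for a poster's credibility.

    All thresholds are illustrative assumptions, not product behavior.
    """
    score = 0
    if account_age_days >= 365:       # established account
        score += 1
    if post_count >= 100:             # real posting history
        score += 1
    if prior_provenance_flags == 0:   # no known provenance issues
        score += 1
    return {3: "likely-credible", 2: "review", 1: "suspect"}.get(score, "suspect")

# A year-old account with a real history and a clean record triages as credible;
# a days-old account with prior flags triages as suspect.
print(score_source(1000, 500, 0))
print(score_source(5, 2, 3))
```

The point of encoding the heuristic, even crudely, is consistency: two reporters triaging the same account reach the same starting label, and the label can be cited in the verification trail.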
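The documentation step can likewise be made mechanical. Below is a minimal sketch of a verification record as structured JSON; the field names (`source_url`, `accessed_at`, `signals`, `decision`) and the example signal values are assumptions for illustration, not a checkreal.ai export format.

```python
import json
from datetime import datetime, timezone

def build_verification_record(source_url: str, signals: dict, decision: str) -> dict:
    """Assemble a citable verification note. Field names are illustrative."""
    return {
        "source_url": source_url,
        "accessed_at": datetime.now(timezone.utc).isoformat(),
        "signals": signals,        # the structured scores cited in the decision
        "decision": decision,
    }

# Hypothetical values for illustration only.
record = build_verification_record(
    "https://example.com/clip",
    {"facial_landmark_stability": 0.42, "lip_sync_alignment": 0.31},
    "do-not-publish",
)
print(json.dumps(record, indent=2))
```

Keeping the record as structured data rather than a free-form note means the signal breakdown can be published alongside the story, giving readers something concrete to evaluate.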

A note on limits

Treat detection as one input. Real video presented out of context is more common than full deepfakes — provenance and source verification remain primary.