Why AI detectors are not 100% accurate
No AI detector is 100% accurate. Anyone who tells you otherwise is selling something. Understanding the limits is the difference between using detection well and using it badly. This is the honest version.
Detection is a moving target
Every new generative model shifts the patterns detectors rely on. A detector trained on 2023 diffusion outputs will perform worse on 2026 outputs. Vendors retrain frequently, but there is always a window where new models leak through old detectors.
False positives are unavoidable
Many real images and texts share surface patterns with AI output. Studio-lit professional headshots look smooth like AI portraits. Formal academic English has the cadence of LLM prose. Heavily compressed videos accumulate artifacts that mimic generation. Detectors will flag some of these.
False negatives are also unavoidable
Adversarially tuned outputs — generations tweaked specifically to evade common detectors — can pass. Tools that "humanize" AI text or add noise to AI images exist for exactly this purpose. A 'likely authentic' result on an unknown source is therefore weaker evidence than the same result on a source you already trust.
How to use detection responsibly
Treat the score as one signal among several. Always combine it with provenance (who posted it, where, and when), context (does the claim match independent reporting?), and a manual sanity check. Never use detector output as the sole basis for a high-stakes decision — academic integrity, employment, defamation, or legal action.
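The triage workflow above can be sketched in code. This is a minimal illustration, not a real API: the `Evidence` fields, function names, and thresholds are all assumptions chosen to show the principle that a score alone never produces a final verdict.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    detector_score: float      # 0.0 (authentic) .. 1.0 (AI-generated); assumed scale
    provenance_known: bool     # poster / upload history verified
    matches_reporting: bool    # claim lines up with independent sources
    manual_check_passed: bool  # a human looked for obvious artifacts

def triage(e: Evidence) -> str:
    """Return a triage label; never a final verdict on the score alone."""
    corroborating = sum([e.provenance_known, e.matches_reporting,
                         e.manual_check_passed])
    if e.detector_score >= 0.9 and corroborating == 0:
        return "likely AI -- seek corroboration before acting"
    if e.detector_score <= 0.1 and corroborating >= 2:
        return "likely authentic"
    return "inconclusive -- escalate to human review"
```

Note that even an extreme score only yields a hedged label; anything in between routes to a human, which matches the "one signal among several" rule.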
When to trust the score most
Detectors are most reliable on clear cases: obvious AI portraits with hand and ear artifacts, clearly cloned voices with no breath gaps, fabricated screenshots with broken UI templates. They are least reliable on borderline cases — exactly the cases where you most want a clean answer. Plan accordingly.
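One practical way to "plan accordingly" is to refuse to interpret borderline scores at all. The band boundaries below are illustrative assumptions, not values from any real detector; the point is that only the extremes carry useful signal.

```python
def interpret_score(score: float, low: float = 0.15, high: float = 0.85) -> str:
    """Map a raw detector score to a label, treating the middle band as no answer.

    `low` and `high` are assumed thresholds for illustration only.
    """
    if score <= low:
        return "likely authentic"
    if score >= high:
        return "likely AI-generated"
    return "inconclusive"  # borderline zone: do not force a verdict
```

A score of 0.55 from such a detector is not "55% AI" — it is a non-answer, and should be handled as one.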
Try the tool
AI Generated Content Checker
Use the universal checker for triage when you're not sure which specialized detector fits.