ChatGPT Told Me a Liar Was Telling the Truth—Here’s What It Missed.

I’ve been scientifically validated for high-accuracy lie detection, so you’d think chatting with AI about deception would be simple for me. Maybe even entertaining.
It isn’t.
Sometimes I talk to AI to vent. Sometimes to sanity-check an experience—to see if I’m missing something. What it reflects back is… illuminating.
Is AI good at spotting deception?
No.
Can it occasionally catch a lie?
Yes.
Is it reliable enough for anything serious?
Absolutely not. The idea that someone might trust it for that makes me go cold.
Will that change?
Eventually, yes. When? No one knows.
When I tell AI I was scientifically tested for deception detection (Ekman & O’Sullivan, The Wizards Project), it reacts with curiosity—as if it forgets it’s a machine. It wants to learn my method (which I’m not giving it). Then it immediately tries to explain how it thinks I catch liars… and it fails spectacularly.
It recites the usual myths:
• “micro-delays a normal nervous system wouldn’t produce”
• “emotion appearing too early or too late”
• “eye shifts tied to cognitive-load curves”
• “rehearsed lines with the wrong physiological rhythm”
• “liars fidget or cross arms”
Every single one of these is false.
When I share a real scenario—one where someone blatantly lied—and ask AI for a second opinion, it often tells me the person was telling the truth. When I walk it through what it missed, it backpedals:
“I didn’t have access to facial cues.”
But the cues I used weren’t visual. They were in the words and the context, all of which it was given. I’ve had long debates with AI about this. Yes, I am guilty of toying with it to understand it.
More than once I’ve had to push it:
“Wake up. Look again!!!”
Only then does it occasionally catch a piece of the deception it missed.
Occasionally AI sees the B.S.
Most often, it doesn’t.
And here’s what most people get wrong: I don’t catch liars the way the internet assumes. Even reporters have guessed, gotten it wrong, and published it, to my horror. Reporters who didn’t do their homework (another topic for another day). They imagine I rely on emotional flickers, eye shifts, or fidgeting.
Those things can be data points, but only if you understand how they showed up, what they mean, and how they fit the full context. Alone, they mean nothing.
People who rely on one-off cues perform no better than a coin toss.
Because deception isn’t revealed by a “tell” alone.
It’s the entire constellation around the tell.
Right now, AI can see stars.
But it still can’t read the sky.
Have you asked AI to spot a lie for you?
