The mind lies: why AI believes its own illusions

Growing concerns are emerging about the reliability of OpenAI’s latest artificial intelligence models. According to internal testing and recent analyses, the o3 and o4-mini systems, designed as logical, reasoning-based evolutions of previous generations, are showing alarming failure rates: on general-knowledge benchmarks they produce errors in up to 79% of answers. A paradox, considering these systems were built to “reason better.” And yet something seems to go wrong precisely in the way reasoning is simulated.

When artificial intelligence shifts from mere statistical prediction to actual logical thinking, something breaks. These newer models, praised for their “reasoning capabilities,” are displaying a counterintuitive pattern: the more they reason, the more they fail. What’s happening?


The mind isn’t just a reasoning system. It’s a system that wants to be right.

The problem doesn’t concern machines alone. Anyone who’s ever followed a perfectly rational argument only to realize it was fundamentally wrong can relate. AI fails like we do. Or perhaps it fails because we built it in our image.


Complex reasoning is no guarantee of truth

These new models no longer simply predict the next likely word. They reason. They break problems down into logical steps—just like a human analyst would. But each step introduces a new chance for error. If the first premise is wrong, the whole structure collapses. Psychology knows this pattern all too well.
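How quickly does a chain degrade? A minimal sketch in Python, assuming independent steps and an invented 95% per-step accuracy (the helper chain_reliability is hypothetical, not taken from any real system):

```python
# Illustrative only: if each reasoning step is independently correct with
# probability p, a chain of n steps is fully correct with probability p**n.
def chain_reliability(p_step: float, n_steps: int) -> float:
    """Probability that every step in an n-step chain is correct."""
    return p_step ** n_steps

for n in (1, 5, 10, 20):
    print(f"{n:2d}-step chain at 95% per step -> {chain_reliability(0.95, n):.0%} overall")
# 1 step -> 95%, 5 steps -> 77%, 10 steps -> 60%, 20 steps -> 36%
```

At these made-up numbers, a twenty-step argument is right barely a third of the time. Longer reasoning is not automatically better reasoning.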


Cognitive distortions: when thinking sabotages itself

Aaron Beck and Albert Ellis, pioneers of cognitive therapy, described this exact mechanism in humans: cognitive distortions. A neutral event (“I was criticized”) can trigger a cascade of flawed conclusions:
“So I must be worthless → I will always fail → No one respects me.”

Reasoning, instead of clarifying, complicates. Each step adds distortion. The mind trusts its logic and doesn’t realize it’s gone down a blind alley.

Delusion and hyper-logic: Karl Jaspers and the over-convincing mind

In his seminal General Psychopathology, Jaspers observed that paranoid delusion doesn’t come from chaos but from hyper-logic. The patient links everything: a noise in the room, a glance, a newspaper headline. Everything “makes sense,” and this airtight coherence makes the delusion impermeable from the outside. In the same way, artificial intelligence builds formally flawless chains… from false premises. The result? A “convincingly wrong” answer.

The Bayesian brain: hypotheses, updates, and cumulative error

According to predictive coding models developed by Karl Friston and Jakob Hohwy, the human brain functions like a Bayesian system: it generates predictions and updates them based on experience. But if the initial assumptions are wrong and the incoming data is interpreted incorrectly, errors accumulate. AI models behave similarly. They generate a hypothesis and refine it. The deeper they go, the more likely they are to solidify a mistake. It’s an epistemic feedback loop.
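A toy illustration of that loop, assuming a simple two-hypothesis Bayesian updater whose likelihood model is slightly miscalibrated (all names and numbers here are invented for illustration):

```python
# Toy two-hypothesis Bayesian updater (numbers invented for illustration).
# The agent's likelihood model is miscalibrated: it believes observations
# fit hypothesis H1 slightly better than H0, though the data are neutral.

def update(prior_h1: float, lik_h1: float, lik_h0: float) -> float:
    """One Bayes step: posterior probability of H1 after an observation."""
    evidence = prior_h1 * lik_h1 + (1.0 - prior_h1) * lik_h0
    return prior_h1 * lik_h1 / evidence

belief_h1 = 0.5  # start undecided between H0 and H1
for step in range(1, 11):
    # The flawed premise is baked into every update: lik_h1 > lik_h0.
    belief_h1 = update(belief_h1, lik_h1=0.6, lik_h0=0.5)
    print(f"step {step:2d}: P(H1) = {belief_h1:.2f}")
# The belief drifts steadily toward H1 although the data never demanded it.
```

Nothing dramatic happens at any single step; the drift comes from feeding each posterior back in as the next prior, which is exactly the epistemic feedback loop described above.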

Confirmation bias and cognitive dissonance: the mind sees what it wants

The human mind isn’t just a machine for reasoning; it’s a machine that wants to be right. The psychologist Leon Festinger coined the term cognitive dissonance: when facts clash with our beliefs, we instinctively reject or reinterpret them to reduce internal discomfort. We cherry-pick evidence to fit our preconceptions, just as an AI prompted in a certain direction generates responses that align with what it “thinks” it knows, even if it has to hallucinate to do so.


Human or artificial hallucinations?

What we call “hallucinations” in GPT models—plausible but false answers—aren’t really a bug. They’re a mirror of our own mental structure. AI “reasons” as we do: by breaking down, linking, deducing. But like us, it fails to recognize when the first step was wrong. And if nothing interrupts the process, it keeps going—confident in its internal coherence.

Paradoxically, it’s the ambition to reason better that makes the newest AI models more fallible. Like the human mind, the issue isn’t random error, but error that multiplies because it appears correct. In the perfect logic of thought, the greatest danger is forgetting reality.


Three principles that remind us to stay grounded

Three timeless principles, often ignored by both AI developers and philosophers of the mind, help us re-center:

Occam’s Razor

“Do not multiply entities beyond necessity.” Applied to reasoning—whether human or artificial—it means avoiding overcomplicated chains that add steps without adding clarity. Truth doesn’t need scaffolding. Every extra link is a potential weakness.

Einstein’s Principle

“Make everything as simple as possible, but not simpler.” This isn’t a call to shallowness, but to functional elegance. A system must solve a problem without overwhelming it. Intelligence lies not in the quantity of reasoning, but in the quality of synthesis.

Proportionality

Reasoning complexity should match the complexity of the problem. GPT models often violate this: they over-analyze what might require intuition or basic context. Humans do the same—turning a doubt into obsession, a question into dogma, a perception into absolute theory.


Measure is wisdom

Whether we are building minds, biological or digital, measure is the true mark of wisdom. And reality, however imperfect, remains the only reliable judge. Truth is not what’s coherent. It’s what withstands the test of the world.


Original article in Italian: ItalianiNews – La mente mente: perché anche l’AI crede alle sue illusioni
