Richard Golian

Born in 1995. Charles University alumnus. Head of Performance at Mixit. 10+ years in marketing and data.


How Is AI Different from Human Intelligence?

AI vs human intelligence — what 99% of people miss
Richard Golian · 2 072 reads
Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.

When we hear "artificial intelligence," many people imagine something mysterious. Something that thinks. Something that understands.

It does not.

I work with AI every day — I build automations with it, code with it, write with it. And the more I use it, the clearer the truth becomes: artificial intelligence is applied mathematics. It processes data and calculates probabilities. A language model — the technology behind ChatGPT, Claude, and the rest — predicts what the next word in a sequence should be.

That is the entire mechanism.

How does AI actually work?

Take a simple example. You give AI a large set of photographs of stones. It analyses the pixel data — colours, edges, textures — and calculates statistical patterns. When you show it a new image, it does not recognise a stone. It calculates the probability that this new image matches the patterns it has seen before.
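To make the stone example concrete, here is a deliberately tiny sketch in Python. A real vision model learns millions of parameters; this toy (the two features and all the numbers are invented for illustration) reduces each photo to a pair of numbers and turns the distance from the average "stone" pattern into a score between 0 and 1 — pattern matching, not recognition.

```python
import math

# Toy illustration, not a real vision model: pretend each photo has been
# reduced to two invented features, e.g. (average brightness, edge density).
stone_examples = [
    (0.55, 0.30),  # features of known stone photos (made-up numbers)
    (0.60, 0.28),
    (0.50, 0.35),
]

def match_probability(features, examples):
    # Average the known examples, then turn the distance from that average
    # into a score between 0 and 1: closer to the learned pattern -> higher.
    mean = tuple(sum(xs) / len(xs) for xs in zip(*examples))
    dist = math.dist(features, mean)
    return math.exp(-dist)  # 1.0 at a perfect match, falling off with distance

print(match_probability((0.56, 0.31), stone_examples))  # near the pattern -> high score
print(match_probability((0.10, 0.90), stone_examples))  # far from the pattern -> low score
```

The point of the sketch is the last two lines: the model never decides "this is a stone". It only reports how closely new numbers match old numbers.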

The same principle applies to text. When you ask an AI a question, it does not understand your question. It processes the statistical relationships between words — which words tend to follow which — and generates the most probable continuation.
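The same idea can be shown for text with a toy bigram model — a drastic simplification of what a real language model does, assuming nothing but a ten-word corpus I made up. It counts which word follows which, then "answers" by emitting the statistically most probable continuation:

```python
from collections import Counter, defaultdict

# Toy bigram model on an invented corpus: count which word follows which,
# then predict by picking the most frequent continuation.
corpus = "the stone is hard . the stone is heavy . the knife is sharp .".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word):
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total  # most probable next word and its probability

print(next_word("stone"))  # ('is', 1.0): "is" followed "stone" every time
print(next_word("is"))     # three continuations, each with probability 1/3
```

No meaning is involved anywhere in this loop — only counting and division. A real model replaces the counts with billions of learned parameters, but the output is still a probability over words.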

It does not think. It calculates.

It applies mathematics. Put simply: it is still just a calculator.

A calculator that writes essays, generates code, and holds conversations — but a calculator nonetheless. The transition from simple arithmetic to something that looks like thinking is not a leap in kind. It is a leap in scale. More data, more parameters, more computation. The mathematics got more sophisticated, but mathematics it remains.

How is artificial intelligence different from human intelligence?

AI does not recognise the world the way humans do. It does not understand it through practical experience. It calculates the probability that something is this or that.

But how does a human understand what a stone is?

First and foremost, by using it — for hunting and protection in the Stone Age, for processing meat, making tools, and so on. A human understands a stone as something useful for something else — as a tool. We use it even before we explicitly name it.

The German philosopher Martin Heidegger described this exact phenomenon in 1927. He called it Zuhandenheit — readiness-to-hand. We understand things not by studying their properties from a distance, but by using them in the context of our lives. A stone is not "an object with properties X, Y, Z." It is something you hunt with. Something you defend yourself with. Something you build with.

This is visible in everything we use — not just stones. A knife, a door handle, a steering wheel. You understand these things through practice, not through description. A child does not learn what a spoon is by reading its Wikipedia entry. They learn by using it, failing, trying again.

AI has no life in which to use things. It has data about how other people used them.

That is the difference.

A human tries to survive and live in a way that seems good — and on that path, they come to know the world. You never approach a problem with an empty head — you always bring everything you have lived through. Your experience, your intuitions, your past failures. Artificial intelligence has training data. That is not the same thing.

Does AI understand what it generates?

AI generates text, code, analyses. But does it understand any of it?

No.

Understanding in the case of AI can only be imitated. Imitated very well — so well that 99% of people cannot tell the difference. But it is still not understanding.

I asked Claude — the AI I use daily — whether it could tell me its confidence level on a factual answer. The response was straightforward: "I am not a system that calculates explicit probabilities over facts. The probability is over language, not over facts."

That single sentence captures the entire distinction. AI does not verify what it says. It predicts what sounds right. When it gives you a perfect answer about SQL syntax, it is not because it understands SQL — it is because SQL is extensively documented and the statistical patterns are clear. Give it a problem that requires genuine contextual reasoning — the kind where multiple pieces of information interact in ways not well-documented across the internet — and it falls apart.

I have seen this firsthand. Analytical tasks where the answer depends on recognising that one piece of data influences another — AI misses it entirely. It does not have what you might call plain common sense. Not because it is stupid, but because common sense comes from living in the world — not from reading about it.

There is a meaningful distinction between AI "knowing" something and AI "understanding" something. If you give it a verified source of truth and instruct it to always defer to that source, you might say it "knows" what that source contains. But understanding? Understanding means you can apply knowledge in situations the source never anticipated. That requires judgement. That requires a life.

When AI surprises you — and when it fails spectacularly

AI is entertaining in unexpected ways. I ran a pilot project recently. The AI estimated the work would take three to five days.

The same AI completed it in one hour and fifty-four minutes.


Summary

Artificial intelligence is applied mathematics — a sophisticated calculator that predicts the next word. Humans understand through lived experience, through what Heidegger called Zuhandenheit. AI imitates understanding so well that 99% of people cannot tell the difference. But the mechanism is fundamentally different, and grasping that difference determines whether you use the tool, or the tool uses you.

Common questions on this article's topic

What is artificial intelligence in simple terms?
At its core, artificial intelligence is applied mathematics. A language model predicts the most probable next word in a sequence based on statistical patterns from training data. It processes data and calculates probabilities — it does not think or understand. In the article, this is illustrated with a simple example: AI analyses many pictures of stones and calculates the probability that a new picture also shows a stone.
How is AI fundamentally different from human intelligence?
Humans understand the world through lived experience — through what the philosopher Heidegger called Zuhandenheit (readiness-to-hand). We know what a stone is because we use it as a tool, not because we study its properties from a distance. AI has data about how others experienced things, but no experience of its own. It calculates probabilities rather than understanding meaning.
Does AI understand what it generates?
No. Understanding in the case of AI can only be imitated — imitated very well, but it is still not understanding. As one AI system stated directly: the probability is over language, not over facts. AI does not verify what it says. It predicts what sounds right. When it gives a perfect SQL answer, it is because SQL is well-documented — not because it understands SQL.
Can AI be creative?
That depends on what creative means. The number of possible combinations from everything invented in human history is effectively infinite — you can create something genuinely original just by combining existing elements in new ways. AI does this. But it cannot create from lived experience — from heartbreak, curiosity, or the specific way light reminds you of something you cannot name. That kind of creation requires a life.
Why does AI hallucinate?
AI hallucination is not a bug — it is fundamental to how language models work. AI predicts the most probable next word based on patterns, but it does not verify whether the result is factually true. Sometimes patterns produce truth, sometimes they produce convincing nonsense. This will not change because the mechanism itself does not include fact verification.
Will AI ever truly understand like humans do?
AI may imitate understanding so well that 99% of people cannot tell the difference. But by definition, a system that predicts the next word based on statistical patterns cannot achieve genuine understanding. It can simulate comprehension, but the underlying mechanism is fundamentally different. Teaching AI to verify facts would require first agreeing on what truth is — and we can only agree on the most basic things.
Is AI just a sophisticated calculator?
Yes — that is the core argument in the article. The transition from simple arithmetic to something that looks like thinking is not a leap in kind but a leap in scale. More data, more parameters, more computation. The mathematics got more sophisticated, but mathematics it remains. A calculator that writes essays and holds conversations is still a calculator.
What is the biggest risk of AI for society?
The growing gap between people who understand how AI works and those who do not — a gap that only deepens. AI is most useful to people who already think clearly and have their own ideas. For those who rely on it without critical thinking, it becomes a crutch that weakens independent thought. The worst scenario is already unfolding.
Richard Golian

If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.

Related articles

The Meaning of Life in the Age of Machines, Algorithms, and Artificial Intelligence

When this sense of meaning disappears, it leaves behind an emptiness that most people find difficult to bear.

22 February 2025·4 225 reads
Is AI Making Us Dumber?

I have conducted roughly one hundred and fifty practical interviews over the past four years. Fifty for data specialist roles. A hundred for advertising and performance marketing specialists. Almost every one of them involved sitting down with a candidate over a practical task — something close to a real problem we actually need to solve at the company. Not theory. Not trivia. Applied problem-solving. Over time, I started noticing a pattern.

14 April 2026·603 reads
What AI Hides From You

Before you can teach AI to understand anything, you need to see what it is hiding from you.

11 April 2026·619 reads

More articles

Full AI agents or fully offline.

Four days in Catalonia. No computer, no AI, almost no social media. I bought this notebook so that I could write down what I would think about, and what I would come across and learn on the trip.

10 May 2026·245 reads
Building an AI Stock Market Prediction System That Grades Itself

I am building an AI system to predict the S&P 500. It runs on my own machine, uses free public data — yfinance, FRED, the Shiller dataset — and grades every forecast against reality. This series documents the build itself: the decisions, the methodology, the mistakes. What I will eventually share from the running system is a separate question, and an honest one.

26 April 2026·572 reads
AI sales forecast: 9 traps so far

Yesterday I could not tear myself away from the computer. When I lifted my head, it was half past eight in the evening. I had been sitting alone upstairs for about three hours.

25 April 2026·557 reads
Will AI take my job?

Will AI take my job? A certified Google trainer told me in June 2024 that my profession would cease to exist. Twenty-two months later, my job title has not changed — but ninety percent of what I do during the day is different. I have delegated more of my thinking to AI agents than I thought possible. I am not afraid. This is why, and what it means for anyone asking the same question.

23 April 2026·325 reads
€50,000 Quote vs. Two Hours with Claude Code

One hour. Fifty-five minutes. That is how long it took to build what a Czech software firm had quoted at over €50,000. I built it with Claude Code. Not a prototype. Not a proof of concept. A working tool — the one the company actually needed. By the evening of the same day, it was running on staging. This is not about Claude Code. It is about what Claude Code exposes.

18 April 2026·686 reads
When Your AI Agent Joins the Team

The moment other people needed access to it, the problem changed completely. It was no longer about whether the agent could learn. It was about who gets to teach it.

8 April 2026·745 reads
Training an AI Agent That Learns Between Sessions

I wanted to build an agent that doesn't just assist. One that acts.

4 April 2026·838 reads
Local AI Model Limitations: Why I Switched from Ollama to Claude for Autonomous Agents

This is what I learned about local vs cloud AI, and why I switched to Claude Code.

3 April 2026·1 334 reads