Richard Golian

1995-born. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.


I am Surprised by the Confident Use of Words Like Certainty and Causality

Data, statistics and knowledge limits
Richard Golian · 2 235 reads
Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.

Today I came across a post on LinkedIn by a digital specialist. He confidently claimed that with an A/B test, we can determine not just correlation, but true causality. He used words like “certainty” as if statistics were part of Newtonian physics — clear, absolute, unquestionable. I am surprised by that level of confidence. I do not have it.

We See Causes Where There Are None

Our brain craves order. When something happens after something else, we instinctively think: “the first thing caused the second.” Got a headache? Must’ve been the coffee. We are built to look for causes — even when they are not there.

From an evolutionary perspective, this makes perfect sense. If you hear a rustle in the bushes, it is safer to assume there is a tiger and run, even if it is just the wind. Evolution has taught us it is better to be wrong than dead. Maybe that is why we tend to see patterns in randomness, connections in the unconnected.

In the Middle Ages, people believed comets brought disaster. Halley’s Comet appeared in 1066 — followed by the Battle of Hastings. Case closed.


Common questions on this article's topic

Why is the confident use of the word certainty problematic?
Because certainty is far rarer than commonly assumed. In the article, a LinkedIn post by a digital specialist who claimed A/B tests can determine true causality prompts the reflection: statistics operates in probabilities, not absolutes. Even well-designed experiments produce higher probability, not certainty. Confusing statistical significance with proof leads to overconfident decisions based on incomplete understanding.
What is the difference between correlation and causation?
Correlation means two things occur together; causation means one actually produces the other. David Hume argued in the 18th century that we never directly observe causality — we only observe that B follows A repeatedly and assume a causal link. In the article, this philosophical insight is applied to modern marketing and data analysis: statistics can show that two variables are related, but not which one causes the other.
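The confounder pattern described above can be sketched in a few lines. This is a hypothetical illustration, not from the article: a hidden third variable (daily temperature) drives both ice-cream sales and sunburn cases, so the two correlate strongly even though neither causes the other.

```python
import random

random.seed(42)

# Hidden confounder: temperature drives both observed variables.
temps = [random.uniform(10, 35) for _ in range(500)]
ice_cream = [t * 2.0 + random.gauss(0, 5) for t in temps]   # sales per day
sunburns = [t * 0.5 + random.gauss(0, 2) for t in temps]    # cases per day

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(ice_cream, sunburns)
print(f"correlation between ice cream sales and sunburns: {r:.2f}")
```

The correlation comes out strongly positive, yet banning ice cream would prevent no sunburns. The statistics alone cannot tell you that; only the causal structure (here, the temperature variable we happen to know about) can.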
Why does the human brain see causes where there are none?
From an evolutionary perspective, assuming causation was safer than ignoring potential threats. If a rustle in the bushes might be a predator, it is better to run and be wrong than to stay and be dead. In the article, this survival mechanism is identified as the root of a persistent cognitive bias: we instinctively look for causes in random events, see patterns in noise, and construct explanations where none exist.
Can A/B tests prove causation?
A/B tests provide stronger evidence than observational studies because they use randomisation to control for confounding variables. However, they still operate within probability — they increase confidence that a difference is real, but they do not deliver absolute certainty. In the article, the claim that A/B tests determine true causality is challenged: even experiments produce higher probability, not proof in the Newtonian sense.
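One way to feel the "probability, not certainty" point is an A/A test: run the standard significance machinery on two groups drawn from the identical distribution. The sketch below (my own illustration, using a simplified two-proportion z-test with an assumed 10% baseline conversion rate) shows that roughly 5% of such tests look "significant" at p < 0.05 purely by chance.

```python
import random

random.seed(0)

def z_stat(p1, p2, n):
    """Simplified two-proportion z-statistic with pooled variance."""
    p = (p1 + p2) / 2
    se = (2 * p * (1 - p) / n) ** 0.5
    return abs(p1 - p2) / se if se else 0.0

n, true_rate, trials = 1000, 0.10, 2000
false_positives = 0
for _ in range(trials):
    # Both "variants" are sampled from the same 10% conversion rate:
    # any detected difference is noise by construction.
    a = sum(random.random() < true_rate for _ in range(n)) / n
    b = sum(random.random() < true_rate for _ in range(n)) / n
    if z_stat(a, b, n) > 1.96:  # ~ p < 0.05, two-sided
        false_positives += 1

rate = false_positives / trials
print(f"'significant' A/A tests: {rate:.1%}")
```

That ~5% is exactly what the 0.05 threshold promises: a bound on how often randomness alone fools you, not a guarantee that any individual "winning" variant caused anything.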
What does David Hume say about causality?
Hume argued in A Treatise of Human Nature (1739) and An Enquiry Concerning Human Understanding (1748) that we never perceive causation directly. We observe constant conjunction — that B regularly follows A — and our minds create the expectation of a necessary connection. But this connection is a habit of thought, not an observed fact. In the article, this insight is applied to challenge the casual use of the word causality in data-driven fields.
Why does this matter for professionals working with data?
Because overconfident causal claims lead to wrong decisions. In the article, the concern is that professionals in marketing and data analysis use words like certainty and causality as if they were dealing with Newtonian physics — clear, absolute, unquestionable. This false confidence can result in strategies built on correlations mistaken for causes, optimisations based on incomplete understanding, and a culture where questioning assumptions is discouraged.
Richard Golian

If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.

Related articles

Is AI Making Us Dumber?

I have conducted roughly one hundred and fifty practical interviews over the past four years. Fifty for data specialist roles. A hundred for advertising and performance marketing specialists. Almost every one of them involved sitting down with a candidate over a practical task — something close to a real problem we actually need to solve at the company. Not theory. Not trivia. Applied problem-solving. Over time, I started noticing a pattern.

14 April 2026·527 reads
What AI Hides From You

Before you can teach AI to understand anything, you need to see what it is hiding from you.

11 April 2026·535 reads
We Are Weak and They Know It

Manipulation without the feeling of being manipulated is the most effective kind.

28 February 2026·1 173 reads

More articles

Building an AI Stock Market Prediction System That Grades Itself

I am building an AI system to predict the S&P 500. It runs on my own machine, uses free public data — yfinance, FRED, the Shiller dataset — and grades every forecast against reality. This series documents the build itself: the decisions, the methodology, the mistakes. What I will eventually share from the running system is a separate question, and an honest one.

26 April 2026·477 reads
AI sales forecast: 9 traps so far

Yesterday I could not tear myself away from the computer. When I lifted my head, it was half past eight in the evening. I had been sitting alone upstairs for about three hours.

25 April 2026·463 reads
Will AI take my job?

Will AI take my job? A certified Google trainer told me in June 2024 that my profession would cease to exist. Twenty-two months later, my job title has not changed — but ninety percent of what I do during the day is different. I have delegated more of my thinking to AI agents than I thought possible. I am not afraid. This is why, and what it means for anyone asking the same question.

23 April 2026·260 reads
€50,000 Quote vs. Two Hours with Claude Code

One hour. Fifty-five minutes. That is how long it took to build what a Czech software firm had quoted at over €50,000. I built it with Claude Code. Not a prototype. Not a proof of concept. A working tool — the one the company actually needed. By the evening of the same day, it was running on staging. This is not about Claude Code. It is about what Claude Code exposes.

18 April 2026·602 reads
When Your AI Agent Joins the Team

The moment other people needed access to it, the problem changed completely. It was no longer about whether the agent could learn. It was about who gets to teach it.

8 April 2026·578 reads
Training an AI Agent That Learns Between Sessions

I wanted to build an agent that doesn't just assist. One that acts.

4 April 2026·729 reads
Local AI Model Limitations: Why I Switched from Ollama to Claude for Autonomous Agents

This is what I learned about local vs cloud AI, and why I switched to Claude Code.

3 April 2026·1 156 reads
Slovakia's Economy in 2026

What happened — and how can it be reversed?

28 March 2026·1 207 reads