I am Surprised by the Confident Use of Words Like Certainty and Causality
Today I came across a post on LinkedIn by a digital specialist. He confidently claimed that with an A/B test, we can determine not just correlation, but true causality. He used words like “certainty” as if statistics were part of Newtonian physics — clear, absolute, unquestionable. I am surprised by that level of confidence. I do not have it.
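To see why "certainty" is too strong a word, consider what an A/B test looks like when there is nothing to find. The sketch below (my own illustration, not from the post in question) simulates 1,000 A/B tests where both variants have the exact same true conversion rate, and counts how many still come out "statistically significant" at the conventional p < 0.05 threshold. It uses only the Python standard library; the two-sided p-value comes from a pooled two-proportion z-test computed by hand.

```python
import math
import random

def two_proportion_p(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value of a pooled two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    # P(|Z| > z) = erfc(z / sqrt(2)) for a standard normal Z
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(42)
n, true_rate = 5000, 0.10   # both variants share the SAME conversion rate
trials = 1000
false_positives = 0
for _ in range(trials):
    a = sum(random.random() < true_rate for _ in range(n))
    b = sum(random.random() < true_rate for _ in range(n))
    if two_proportion_p(a, n, b, n) < 0.05:
        false_positives += 1

print(f"{false_positives} of {trials} tests looked 'significant' "
      f"despite zero real difference")
```

By construction, roughly 5% of these tests will report a "significant" winner that does not exist. That is not a flaw in A/B testing; it is what a 5% significance threshold means. But it is the opposite of certainty.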
We See Causes Where There Are None
Our brain craves order. When something happens after something else, we instinctively think: “the first thing caused the second.” Got a headache? Must’ve been the coffee. We are built to look for causes — even when they are not there.
From an evolutionary perspective, this makes perfect sense. If you hear a rustle in the bushes, it is safer to assume there is a tiger and run, even if it is just the wind. Evolution has taught us it is better to be wrong than dead. Maybe that is why we tend to see patterns in randomness, connections in the unconnected.
In the Middle Ages, people believed comets brought disaster. Halley’s Comet appeared in 1066 — followed by the Battle of Hastings. Case closed.
