Richard Golian

Born in 1995. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.


Is AI Making Us Dumber?

The AI Critical Thinking Problem
Richard Golian · 672 reads
Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.

I have conducted roughly one hundred and fifty practical interviews over the past four years. Fifty for data specialist roles. A hundred for advertising and performance marketing specialists.

Almost every one of them involved sitting down with a candidate over a practical task — something close to a real problem we actually need to solve at the company. Not theory. Not trivia. Applied problem-solving.

Over time, I started noticing a pattern.

It was not sudden. There was no single moment where the quality of candidates collapsed. But if you do this long enough — if you keep the same tasks and the same standards — you begin to see the curve.

And the curve is going down.

THE BEST RESULT I EVER SAW — WITHOUT AI

The strongest solution I have ever received came from a woman who had no AI.

No ChatGPT. No Copilot. No language model running in the background during her hour of preparation. She had the data, the task description, and her own mind.

She is a mathematics teacher. She works in education, promoting innovative Montessori-inspired approaches to teaching — the kind of person who spends her days figuring out how to make complex things understandable.

It showed.

I cannot share what she did specifically — the interviews are still ongoing and the tasks are still in use. But on a scale of one to ten, she scored a nine.

There were several moments during her presentation where I thought: this is a different league. Not one moment. Several.

She did not just answer the question. She noticed things in the data that were not part of the question. Things I had not explicitly asked about. Things that mattered.

That is the difference between AI-assisted output and genuine human problem-solving. AI answers what you ask. She answered what mattered.

To this day, no candidate with AI access has come close.

THE DECLINE OF INDEPENDENT THINKING

Three years ago, roughly one in five candidates scored above six out of ten. Their solution was solid enough that the interview became genuinely interesting — a conversation, not an evaluation.

One in twenty scored eight or higher. That meant they solved the task independently, mostly correctly, with a level of critical thinking that made it clear they understood what they were doing.

Today — for data specialist roles — I would struggle to find anyone above five.

I am not talking about a subtle shift. I am talking about a measurable decline in independent problem-solving that has reshaped what I expect when I walk into an interview room.

For advertising specialists, the picture is slightly better. But the trend is the same.

THE AI PRODUCTIVITY PARADOX — MORE TOOLS, WORSE RESULTS

Data specialist candidates receive the dataset and the task one hour before the interview. They have time to prepare. They have access to any tools they choose — including AI models that are orders of magnitude more capable than anything that existed when I started doing these interviews.

More time. Better tools. Worse results.

In any other context, we would call this a crisis.

I am not the only one seeing this pattern. A randomised controlled trial by Grace Liu and colleagues (1,222 participants across multiple experiments) found that people who used AI assistance performed significantly worse once the tool was removed — even after only ten minutes of exposure. The researchers suggest that AI conditions people to expect immediate answers, denying them the experience of working through challenges on their own. The mechanism has a name: cognitive offloading.

An MIT Media Lab study of 54 participants using EEG measurements found that brain connectivity systematically decreased with the level of AI assistance — with the ChatGPT group showing the weakest neural coupling in areas associated with executive function, semantic processing, and attention regulation. The study is a preprint and has not yet been peer-reviewed — but the direction is consistent with what I observe in my interviews.

These are different domains — mathematics, essay writing. My domain is analytical problem-solving in data and marketing. But the underlying pattern is difficult to dismiss. Better tools. Less effort. Weaker independent performance.

Here is what it looks like in practice. The candidate takes the data, feeds it into a model, and brings the output to the interview. Sometimes the output looks clean. Sometimes even impressive — well-structured, visually polished.

Then I ask the first follow-up question.

"Why did you approach it this way?"

And the room goes quiet.

The answer was never theirs.

WHEN AI REPLACES THINKING — THE BLACK-BOX GENERATION

The most striking interviews I have experienced recently involve young candidates who are skilled programmers. They know how to fine-tune models, adjust algorithms, work with complex tooling.

They present their solution confidently. The output looks structured, sometimes even impressive. But after a few follow-up questions, the picture becomes clear. They fed the data into a model, treated it as a black box, and accepted whatever came out. They do not say it that bluntly — but that is what happened.


Summary

After conducting 150 practical job interviews over four years, I observed a measurable decline in candidates' independent problem-solving. The AI productivity paradox is real — more powerful tools are producing weaker analytical thinking. Candidates delegate reasoning to AI through cognitive offloading, then cannot explain their own results. The question is not whether AI is making us dumber, but whether it is eroding the critical thinking, attentiveness, and common sense that no model can replace.

Common questions on this article's topic

Does AI make us dumber?
Research suggests AI does not make us dumber directly, but it encourages cognitive offloading — delegating thinking to AI before understanding the problem. This can reduce critical thinking, attentiveness, and independent problem-solving over time.
What is cognitive offloading?
Cognitive offloading is the tendency to delegate mental tasks to external tools like AI, reducing the effort we invest in thinking through problems ourselves. A randomised controlled trial of 1,222 participants found that even ten minutes of AI assistance led to worse independent performance afterwards.
How does AI affect job interview performance?
Based on 150 practical interviews for data and marketing roles, candidates with AI access today score lower on analytical tasks than those from three years ago who had no AI. They often produce polished outputs but cannot explain their reasoning when asked follow-up questions.
Can AI replace critical thinking?
AI can assist with data analysis but cannot replace critical thinking, contextual judgement, or common sense. The strongest performers in interviews combine mathematical ability with logic, attentiveness, and independent reasoning — skills that AI currently lacks.
Richard Golian

If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.

