The Meaning of Life in the Age of Machines, Algorithms, and Artificial Intelligence
In my previous post, I reflected on what we need for a good life in an era increasingly shaped by artificial intelligence and automation. I concluded that it is meaning—today, tomorrow, and ten years from now. We need our lives and the world around us to have some sense of purpose, or at least for us to be on the path to finding it. When this sense of meaning disappears, it leaves behind an emptiness that most people find difficult to bear.
We typically find meaning in work, relationships, tasks, and hobbies. But what happens when most of these opportunities vanish? When machines and algorithms take over the majority of meaningful activities? When they perform most tasks better than we do? To what will we dedicate our time when the things that once gave our lives purpose become unnecessary? This emptiness arises not because we lack material things but because we lose what truly matters to us.
In philosophy, this feeling of emptiness is known as anxiety.
The Phenomenon of Anxiety
I first became deeply interested in the concept of anxiety when I read Being and Time by Martin Heidegger during my university studies. It was challenging reading. In seminars, we dissected the text sentence by sentence. I had never encountered anything like it before, but the ideas this book offers are well worth the effort.
Heidegger describes anxiety as fundamental to existence—something that reveals the true nature of our being. Anxiety differs from ordinary fear. Fear always has a specific object—we fear illness, loss, or failure. Anxiety, however, has no specific object. In a state of anxiety, the world as a whole appears meaningless. Activities and relationships that we usually take for granted seem to lose their significance. This is not fear of something in the world; rather, it is a revelation of the fact that our being is our own responsibility, with no predetermined purpose.
Heidegger explains that anxiety confronts us with the state of Geworfenheit (thrownness)—the realisation that we have been thrown into the world without our consent, without a clear guide on how to live within it. This recognition forces us to confront our freedom and responsibility, reminding us that no external framework provides the meaning we seek.
A key aspect of Heidegger’s conception of anxiety is that it grants us access to authentic being. When we recognise that our time is finite, we gain the opportunity to live on our own terms, rather than according to societal expectations. Anxiety, therefore, is not merely an uncomfortable state but a crucial moment of clarity in which we can reclaim the direction of our lives.
In a digital age where algorithms and machines handle ever more practical matters, a unique challenge emerges. When the everyday obligations that once distracted us disappear, we are left with only ourselves and one pressing question: What now?
In my view, we will stand at a crossroads. One path is to seek out new, meaningful activities. The other is to embrace anxiety and the questions and challenges it brings. Perhaps it is this second path that will ultimately lead us to a good and truly authentic life.
Ignoring both paths takes us somewhere too, just not where we would want to go. The question is how many people will drift in those other directions. I can imagine a future world in which living a good life will not be so easy.