Richard Golian

1995-born. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.


Fear is Useful: AI and Robotics as a Threat to Our Freedom and Security

Richard Golian · 2 668 reads
Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.

Many people today fail to grasp the risks that come with the rapid advancement of AI and robotics. It is time to ask the hard questions: What scenarios await us in the future? How do we prepare for them, and how should we respond?

The first step in addressing any potential problem is acknowledging it. So I ask: what if physical control—or even elimination—carried out by AI-powered robots stops being just a sci-fi concept?

I wish I could dismiss this question as absurd and irrelevant, but I know that ignoring it or downplaying its significance will not protect us from future dangers or the loss of personal freedom.

We already have systems where AI assists in managing drones, military operations, and real-time behavioural analysis. But what if, tomorrow, a drone were hovering outside your balcony? It would have no moral dilemmas, feel no remorse, and never question an order. And that is assuming it even needs an order at all.

If such technology falls into the wrong hands, who will stop it? Do we truly believe that our law enforcement agencies will be able to protect us from swarms of small, autonomous flying objects when the development and production of such technology are becoming cheaper and more accessible? Today, terrorists must risk their freedom and lives. Technology can make terrorism anonymous, pushing it into an entirely new dimension.

The next step is a world where systems and machines make decisions on their own. But what does “decision-making” even mean in the context of AI? It is not just about following an instruction—it is about choosing between multiple options, often based on data that humans cannot even see, let alone understand. And when these decisions happen within milliseconds, is any kind of external intervention even possible?

Continue

Join the Library

Full access to my thoughts, personal stories, findings, and what I learn from the people I meet.

Join the Library — €29.99 per year
Or just this article · €2.99

Get the full article by email and feel free to reply if you want to discuss it further.


Summary

Autonomous drones that do not hesitate. Cheap robotics accessible to anyone. Decision-making at speeds where human intervention is impossible. Most people underestimate these risks. This article explains why fear, in this case, is useful.

Common questions on this article's topic

Are autonomous drones already being used in warfare?
Yes. Autonomous drone technology is actively deployed in the Ukraine conflict. Ukraine produced an estimated 2 million drones in 2024, with AI-equipped strike drones capable of determining attack timing without direct operator commands. Russia has integrated AI navigation into its Shahed drones. The scale and pace of autonomous warfare is accelerating rapidly. This is no longer a theoretical concern.
Could autonomous weapons make decisions without human oversight?
This is already happening at limited scale. AI-integrated targeting systems can compress decision cycles from hours to seconds, making meaningful human intervention increasingly difficult. The ICRC has raised concerns about weapons that self-initiate strikes in response to environmental data without the operator knowing the specific target or timing. The speed of these systems fundamentally challenges traditional frameworks of accountability and ethics.
Is there an international treaty regulating autonomous weapons?
No. Despite 161 UN member states voting in favour of discussing regulation in November 2024, there is no legally binding international treaty on autonomous weapons comparable to nuclear arms agreements. The main obstacle is consensus-based decision-making, where a single country can block proposals. In October 2024, the UN Secretary-General and ICRC President jointly called for a treaty by 2026 — acknowledging that one does not currently exist.
How accessible is military drone technology becoming?
Increasingly accessible and affordable. Autonomous weapons do not require expensive raw materials, making them cheap to mass-produce. As costs fall and technology proliferates, the barrier to entry shrinks — raising the concern that drone-based threats could become available to non-state actors, making terrorism potentially anonymous and pushing it into an entirely new dimension.
Could AI be used to develop biological weapons?
This is considered a realistic and growing threat. In a 2024 study, researchers using AI protein design tools generated over 70,000 DNA sequences for toxic variant proteins, and existing biosecurity screening tools failed to flag many of them. Harvard's Belfer Center and the Center for a New American Security have both identified AI-enabled bioweapons as a serious national security risk.
Richard Golian

If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.

NEWSLETTER
What I write about, what I am working on, what I learned.
Sent the first Sunday of the month. Unsubscribe anytime.

Related articles

AI-Powered Investigations: The Future of Exposing Crime

The more I think about it, the more I realize what a fundamental issue this is.

16 March 2025·2 494 reads
Uncertain Future, Unanswered Questions

No matter how I look at the future, I see very few answers and far too many questions and problems.

25 February 2025·2 862 reads
AI as a Threat to Banks, Markets, and the Global Financial System

It is real, growing, and potentially devastating.

21 February 2025·2 044 reads

More articles

Where the Money Goes When AI Takes the Work

Prague, 13 May 2026. On my way to work I started thinking about something that stayed with me for days. If most routine work on a computer disappears in the next ten years, and a large share of repetitive manual work disappears with it, what happens to the flow of money? Who pays whom for what? Which economic layers will exist, how large will they be, and what relationships will run between them? This is the six-layer map I sketched as an answer.

15 May 2026·40 reads
Building an AI Stock Market Prediction System That Grades Itself

I am building an AI system to predict the S&P 500. It runs on my own machine, uses free public data — yfinance, FRED, the Shiller dataset — and grades every forecast against reality. This series documents the build itself: the decisions, the methodology, the mistakes. What I will eventually share from the running system is a separate question, and an honest one.

26 April 2026·612 reads
AI Sales Forecast: 9 Traps So Far

Yesterday I could not tear myself away from the computer. When I lifted my head, it was half past eight in the evening. I had been sitting alone upstairs for about three hours.

25 April 2026·582 reads
Will AI take my job?

Will AI take my job? A certified Google trainer told me in June 2024 that my profession would cease to exist. Twenty-two months later, my job title has not changed — but ninety percent of what I do during the day is different. I have delegated more of my thinking to AI agents than I thought possible. I am not afraid. This is why, and what it means for anyone asking the same question.

23 April 2026·364 reads
€50,000 Quote vs. Two Hours with Claude Code

One hour. Fifty-five minutes. That is how long it took to build what a Czech software firm had quoted at over €50,000. I built it with Claude Code. Not a prototype. Not a proof of concept. A working tool — the one the company actually needed. By the evening of the same day, it was running on staging. This is not about Claude Code. It is about what Claude Code exposes.

18 April 2026·720 reads
Is AI Making Us Dumber?

I have conducted roughly one hundred and fifty practical interviews over the past four years. Fifty for data specialist roles. A hundred for advertising and performance marketing specialists. Almost every one of them involved sitting down with a candidate over a practical task — something close to a real problem we actually need to solve at the company. Not theory. Not trivia. Applied problem-solving. Over time, I started noticing a pattern.

14 April 2026·671 reads
What AI Hides From You

Before you can teach AI to understand anything, you need to see what it is hiding from you.

11 April 2026·668 reads
When Your AI Agent Joins the Team

The moment other people needed access to it, the problem changed completely. It was no longer about whether the agent could learn. It was about who gets to teach it.

8 April 2026·825 reads
Training an AI Agent That Learns Between Sessions

I wanted to build an agent that doesn't just assist. One that acts.

4 April 2026·875 reads
Local AI Model Limitations: Why I Switched from Ollama to Claude for Autonomous Agents

This is what I learned about local vs cloud AI, and why I switched to Claude Code.

3 April 2026·1 476 reads
Full AI Agents or Fully Offline

Four days in Catalonia. No computer, no AI, almost no social media. I bought this notebook so that I could write down what I would think about, and what I would come across and learn on the trip.

10 May 2026·322 reads