Richard Golian

Born in 1995. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.


We Are Weak and They Know It

Manipulation, propaganda and cognitive bias
Richard Golian · 1 271 reads
Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.

I am writing this post primarily for myself — as a map of the traps built into today's world. A way to navigate it more clearly. This is a mental exercise for me.

The environment we live in today is not natural to us. It has been optimised for manipulation. It was built through the internet, social media, and the surrender of our privacy.

We are predictable. Not as individuals, but as masses.
Not because we are stupid.
Not because we are inferior.
But because we are living creatures with certain traits hardwired by evolution.

They know this well.

Those who hold power often understand one fundamental thing — the weakness of human nature. On that weakness, you can build a business, a marketing strategy, a political campaign, or a system of manipulation.

We have a brain that evolved for hundreds of thousands of years in conditions dramatically different from today. We have instincts designed for survival. We have needs that once held us together as a tribe. We have mental shortcuts meant to save energy.

But today, the mechanisms that helped us survive are turning into vulnerabilities.

An Evolutionary Disadvantage in the Modern World

Imagine a human being 50,000 years ago.

They live in a small group. They know everyone. They know who is strong, who is reliable, who is dangerous. Food is limited. Every unit of energy matters. Every mistake — like failing to react to a threat — can mean death. Every threat is physical and immediate: a predator, a hostile group, hunger, disease. Information spreads slowly, orally, in person, inside a small community.

The brain optimised for:

  • seeking calorie-dense food
  • fast, strong threat response
  • belonging to a group

These mechanisms were an advantage.

The one who reacted quickly to a rustle in the grass survived.
The one who stayed close to the group survived.

The brain learned to save energy and rely on shortcuts. Trust familiar faces. React to strong emotions.

Today:

  • food is everywhere
  • threats are often informational, not physical
  • groups are often closed opinion bubbles, cults, or sects — mostly in the digital space

The threat is no longer a lion. It is a headline. It is a notification. It is a message engineered to trigger fear.

Belonging once meant physical protection. Today it often means approval through likes, identity confirmation, opinion confirmation.

We live in a world that changes faster than humans do.

Calorie-Dense Food With No Nutritional Value

Let us start with something that seems banal: food.

Our bodies respond strongly to sugar and fat. Those who could find calorie-rich food had a higher chance of survival.

The body built a strong reward response around them. When we found something sweet or fatty, the reward system activated. That system existed to make sure we remembered the experience and repeated it.

But today we do not live in scarcity. We live in excess.

The largest food companies invest enormous resources into researching which combination of taste and texture produces the highest customer satisfaction — so that customers buy again, so that the reward system sends the signal: do it again. At the same time, they optimise costs: shelf life, storage, logistics. The result is the growth of ultra-processed foods with minimal nutritional value.

Their availability and addictive design lead to overeating and health problems. Our mind does not treat calorie surplus as a danger. It treats available energy as an opportunity.

I have written before about how almost completely cutting these foods out eliminated my health issues.

I saw firsthand that what once built us up now makes us dependent on healthcare, medication, and supplements.

What once strengthened us now wears us down. And it does not stop with food.

Sex as a Tool of Power

This is one of the oldest mechanisms we have. Evolutionarily, it ensures reproduction. Which is why its motivational force is extreme. The brain responds to sexual stimuli intensely — especially when novelty and visual stimulation are involved.

In the digital environment, sex is available in almost unlimited quantities — the way calorie-dense food is in supermarkets. Pornography attacks this drive directly. The brain can respond to digital novelty even more intensely than to the natural context of a relationship, because the digital environment enables infinite variation of stimuli.

The problem is not sexuality itself. Sexuality is a natural part of being human. The problem begins when our intimate life becomes recordable — and archivable. That creates a new kind of vulnerability — one that is systematically exploited today.

Imagine a person with a certain status. A public image, a family, a career, political or economic power. Their credibility is part of their capital. If they get involved in something that should not be public, an information asymmetry emerges. Someone knows something the public does not. And the moment there is evidence — a recording, a photo, a message — leverage appears.

A strong drive can lead to creating material that becomes a reputational risk. And reputational risk can be used for pressure or blackmail. The pressure does not even have to be explicit. It is enough that it is silently understood: we know about you.

Technological progress has amplified this vulnerability dramatically. Today it is possible to record communication discreetly, store data, archive private moments. The cost of compromising someone is lower than at any point in history. Technology lowers barriers. And when a powerful biological drive meets low recording barriers and high reputational value, a mechanism emerges that can be exploited systematically.

A biological instinct becomes a tool of control. It is a power mechanism that often influences companies, states, international organisations, and scientific circles — and therefore has a real impact on all of our lives. There are individuals and entities whose business and power are built on this — on one of our greatest vulnerabilities.

Again: what once helped preserve the species and build stronger communities becomes a weakness today — and therefore a tool of power and control.

And this is only one of many ways it happens.

Marketing, Gambling, and Financial Illiteracy

Many of today's traps attack our financial health.

People do not respond symmetrically to gains and losses. Loss hurts more than gain pleases — a phenomenon described in the Nobel Prize-winning research of Daniel Kahneman. The prospect of losing hits us harder than an equivalent gain.

This is exploited deliberately. That is why tactics like "only a few left in stock" or "12 people are viewing this product right now" work. Artificial urgency reduces the time available for rational decision-making. When a person feels they might lose something, they act faster and think less. We shift into loss-prevention mode, not analysis mode. In that state we do not calculate long-term consequences. We react impulsively.
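The asymmetry Kahneman and Tversky measured can be sketched numerically. A minimal illustration of the prospect-theory value function, using the parameter estimates from their 1992 paper (α = β ≈ 0.88, loss weight λ ≈ 2.25); the parameters come from the literature, the example amounts are mine:

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 parameter estimates).
# Subjective value grows slower for gains than it falls for losses.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Felt value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha              # diminishing sensitivity to gains
    return -lam * (-x) ** beta         # losses weighted about 2.25x more heavily

gain = value(100)    # felt value of winning 100
loss = value(-100)   # felt value of losing 100

print(round(gain, 1))          # ≈ 57.5
print(round(loss, 1))          # ≈ -129.5
print(round(-loss / gain, 2))  # ≈ 2.25: the loss hurts more than twice as much
```

The exact numbers matter less than the shape: the same 100 units weigh roughly twice as much when framed as something you might lose, which is exactly what "only a few left in stock" exploits.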

But there are also traps built on hope. Gambling promises a shortcut. It promises to skip the long and difficult process of building value. Mathematics is ruthless. In the long run, the house always wins.
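The house edge is easy to verify. A minimal sketch of an even-money bet in European roulette (18 winning pockets out of 37); the pocket layout is simplified to the counts that matter:

```python
import random

# European roulette has 37 pockets (0-36); an even-money bet wins on 18 of them.
# Expected value per 1-unit bet: (18/37)*(+1) + (19/37)*(-1) = -1/37 ≈ -0.027.

def simulate(bets, seed=42):
    """Total profit after a number of 1-unit even-money bets."""
    rng = random.Random(seed)
    bankroll = 0
    for _ in range(bets):
        pocket = rng.randrange(37)                   # 0..36
        bankroll += 1 if 1 <= pocket <= 18 else -1   # 18 winning pockets
    return bankroll

print(-1 / 37)           # theoretical edge: about -2.7% of every unit staked
print(simulate(200_000)) # a large negative total: the edge compounds with volume
```

Individual sessions can end up ahead, but the expected loss of about 2.7 cents per euro staked never goes away, which is why volume is all the house needs.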

Commercial marketing and gambling systematically extract money from our wallets. Not randomly. Using well-documented patterns in how we think and behave. Combined with financial illiteracy, this makes the poor poorer and creates a cycle that is very difficult to escape.

Mass behaviour is predictable. And where behaviour is predictable, it can be modelled. And what can be modelled can be optimised, monetised — or otherwise exploited.

Anger and Fear in Political Marketing

Modern digital platforms do not display content based on truthfulness or quality. The algorithm rewards content that captured attention and triggered a response — a click, a comment, a share.
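The incentive described above can be sketched in a few lines. This is not any platform's actual formula — just a toy score with invented weights that shows the shape of the logic: only reactions are counted, and truthfulness never enters the inputs.

```python
# A toy engagement-based ranking. The weights are hypothetical; the point is
# what the inputs do NOT include: accuracy, quality, context.

def engagement_score(clicks, comments, shares):
    # Invented weights: a comment or share signals stronger engagement
    # than a click, so it counts for more.
    return 1.0 * clicks + 3.0 * comments + 5.0 * shares

posts = [
    {"title": "Careful, nuanced analysis", "clicks": 400, "comments": 10, "shares": 15},
    {"title": "Outrage bait", "clicks": 900, "comments": 120, "shares": 200},
]

# Rank purely by engagement; truthfulness never enters the calculation.
ranked = sorted(
    posts,
    key=lambda p: engagement_score(p["clicks"], p["comments"], p["shares"]),
    reverse=True,
)

for p in ranked:
    print(p["title"], engagement_score(p["clicks"], p["comments"], p["shares"]))
# Outrage bait 2260.0
# Careful, nuanced analysis 505.0
```

Whatever a real platform's weights are, any score built solely from reactions will rank the post that provokes the strongest reaction first.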

And what generates the strongest reactions? Emotions. For example: anger.

Anger is mobilising. In an evolutionary context, it signalled readiness for conflict. It activates the body, accelerates response. But it also weakens analytical thinking. When we are angry, we react faster but think less.

Content designed to provoke outrage exploits this precisely. It triggers the sense that "this is unacceptable", "this must be stopped", "this is an attack on us." It often works with extremes, oversimplifications, or things stripped of context. Provocative headlines, reductive interpretations, deliberately inflammatory framing — all with one goal: an immediate emotional reaction.

An angry person clicks more. Comments more. Shares more. And thereby increases the reach of the content that angered them. Outrage becomes the engine of the attention economy. It generates attention, generates profit — and deepens social polarisation. Groups seal themselves into opinion bubbles. Criticism becomes attack. Discussion becomes combat.

Even stronger than anger is fear.

Fear runs even deeper evolutionarily. When we are afraid, the brain shifts into defensive mode. Areas responsible for rapid threat response activate. In a state of fear, we do not seek complex solutions. We seek protection. Certainty. Someone or something to "fix it."

Political marketing built on fear targets the survival instinct and the need for safety. The threat does not need to be imminent. It does not even need to be as real as it is presented. When we feel threatened, we are more willing to accept information that offers a quick sense of orientation and safety — even if that information is inaccurate or misleading.

Conspiracies

Conspiracies attack the threat instinct, the need to understand the world, and the need to find someone to blame. They offer a simple explanation for complex events and a clearly named enemy. They give the sense that chaos has a hidden order — and that someone knows "the real truth."

When simple stories, half-truths, falsehoods, and conspiracies begin spreading unchecked, a broader problem emerges.

Information Chaos

Information chaos diverts attention from verifiable facts — and this state is often precisely the goal of various state and non-state actors. When the space is flooded with extreme claims, rational discourse disappears. Chaos is easier to manipulate within. The less we believe that anything can be known, the more readily we accept emotional narratives instead of facts. And the more easily we conform to whichever group offers us at least some framework.

Cults, Identity, and the Need to Belong

In chaos, people naturally turn to their group. The need to belong is ancient. It worked because group loyalty increased the chances of survival. Alone in nature, a person rarely survived. The group meant safety.

Today this need leads to polarisation and blind trust. The cult is a prime example: a strong leader, a clear enemy, a simple story, gradual isolation from alternative perspectives.

When identity becomes fused with an opinion, criticism of that opinion is experienced as an attack on the person. And at that point, rationality retreats.

The need to be part of a community can be exploited through cult dynamics and strong group identity. A person then stops searching for truth and starts searching for safety. Not for correct answers, but for their place in the group.

This bias can be exploited through culture wars. When people split into camps, truth becomes irrelevant. What matters is who said it and which side they are on.

Every person has their own filters, their own traumas, their own need for recognition or status. And those filters can be targeted with precision.

The Illusion of Spontaneous Support

When we see that "everyone" believes something, we tend to conform. Social proof is a powerful mechanism. It is exploited by creating the impression of grassroots support — coordinated accounts, artificially amplified content, campaigns designed to look organic.

When You Think You Decided for Yourself

The most sophisticated mechanism is not coercion. It is the design of the environment.

If someone selectively provides you with information, frames the interpretation, highlights the risks, and suppresses the alternatives — you might decide exactly as intended. And you will feel free.

Manipulation without the feeling of being manipulated is the most effective kind.

Specific Mechanisms of Manipulation

We do not experience the world like a computer processing all data objectively and without emotion.


Sources

Daniel Kahneman, Nobel Prize in Economics 2002, for research on cognitive biases.

Summary

Our evolutionary instincts — the need for calories, belonging, safety — once kept us alive. Today, they are systematically exploited. Food companies engineer addiction. Algorithms reward outrage over truth. Financial systems profit from our inability to assess risk. This article maps the specific mechanisms through which modern power structures turn human nature against us. Not to spread fear — but to make these patterns visible.

Common questions on this article's topic

Why are humans so vulnerable to manipulation in the modern world?
Our brains evolved over hundreds of thousands of years in conditions dramatically different from today. Instincts designed for small-group survival — seeking calorie-dense food, reacting quickly to threats, staying close to the group — once kept us alive. But in a world of infinite information, algorithmic content, and engineered stimuli, these same mechanisms become vulnerabilities. The threat is no longer a predator. It is a headline, a notification, or a message designed to trigger fear.
What is confirmation bias and how do algorithms exploit it?
Confirmation bias is the tendency to seek out information that confirms what we already believe and dismiss what challenges it. It was formally identified by psychologist Peter Wason in 1960 and is one of the most thoroughly studied cognitive biases. In digital environments, algorithms amplify this effect by showing us content similar to what we have already engaged with. Our view is constantly reinforced — not because it is more true, but because it is more frequently repeated.
What is the anchoring effect and how is it used in manipulation?
Anchoring is a cognitive bias documented by Tversky and Kahneman in 1974 in which the first piece of information we receive sets a reference frame for all subsequent judgments. If a product is presented as originally costing 1,000 euros and now costs 500, the mind evaluates 500 as a bargain — even if the real value is 300. In politics and media, whoever sets the first anchor often wins the interpretation of an event.
Why do repeated lies eventually start to feel true?
Because of the illusory truth effect, first documented in 1977 by Hasher, Goldstein, and Toppino. The mechanism is simple: hearing something repeatedly makes it feel more credible — not because we have verified it, but because it sounds familiar. Familiar things feel safe, and safe things feel true. If an idea is repeated long enough, it can become common sense even if it was originally false. This is why repetition is the most basic tool in advertising, political propaganda, and disinformation campaigns.
What is astroturfing and how does it create false consensus?
Astroturfing is the creation of an illusion of spontaneous public support through coordinated accounts, organised comments, or paid campaigns designed to look like natural opinion. When we see that many people appear to think something, we tend to conform — a phenomenon known as social proof. But when the majority is manufactured, our decisions rest on an illusion. Research has found that roughly 20% of global Twitter trends in 2019 were artificially created.
How can someone make you believe you made a free choice when you did not?
Through a strategy called reflexive control, originating with Soviet mathematician Vladimir Lefebvre in the 1960s. Rather than forcing a decision, the environment is configured so the person decides on their own — but exactly as intended. This is achieved through selective information, controlled framing, and strategic timing. The person feels they made a free choice, but the conditions under which they chose were deliberately shaped. In the article, this is described as one of the most sophisticated forms of manipulation.

If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.

NEWSLETTER
What I write about, what I am working on, what I learned.
Sent the first Sunday of the month. Unsubscribe anytime.
