Richard Golian

Born in 1995. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.





Teaching AI to Understand

A significant part of my university studies was devoted to the structure of understanding — how meaning is formed before we consciously process it, how interpretation is always grounded in what we already know, and how attunement shapes what we see before any argument is made. In my master's thesis at Charles University, I examined these structures in Roman rhetorical texts by Quintilian, Cicero, and the author of Rhetorica ad Herennium. Today I apply these insights to training AI agents. How you structure context, what fore-understanding you give a model, how you handle its limitations — these are not just technical questions. They are questions about the nature of understanding itself. #artificialintelligence #cognition #data

5 articles

How do you teach a machine to understand? Not just to process tokens, but to interpret meaning in context? A significant part of my university studies at Charles University focused on exactly this question — studying the structure of understanding and examining it in Roman rhetorical texts by Quintilian, Cicero, and the author of Rhetorica ad Herennium. Working with Heidegger's phenomenology — particularly the analysis of interpretation in Being and Time — provided a framework for examining how fore-structures, attunement, and the concealment of skill shape the way meaning is formed. These are the same dynamics that become visible when you train AI: what context you provide, what assumptions you build in, and what you leave implicit. These articles document that process — from building persistent memory and self-correction loops to the deeper question of how understanding itself works, and what that means for artificial intelligence.

Common questions about this topic

What is the structure of understanding and what does it have to do with AI?
The structure of understanding describes how interpretation works — what shapes meaning before we even begin to think consciously. Every interpretation is grounded in fore-understanding, attunement, and available concepts. AI lacks these structures entirely — it has no fore-understanding unless you provide it. That is why how you structure context, set up system prompts, and design memory fundamentally changes what an AI produces. Understanding these structures gives you an advantage most people working with AI do not have.
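One way to make this concrete: fore-understanding can be supplied to a model as structured context that precedes the question itself. Here is a minimal sketch in the common chat-message shape; the specific fields, roles, and example facts are illustrative assumptions, not a fixed API.

```python
def build_context(system_rules, known_facts, user_question):
    """Assemble messages so interpretation is grounded before the question arrives.

    system_rules: the attunement — how the model should approach everything.
    known_facts: the fore-understanding — what is already settled.
    user_question: only now, the thing to interpret.
    """
    messages = [{"role": "system", "content": system_rules}]
    for fact in known_facts:
        # Each fact is context the model does not have to guess at.
        messages.append({"role": "system", "content": f"Known context: {fact}"})
    messages.append({"role": "user", "content": user_question})
    return messages


# Hypothetical example values, for illustration only.
ctx = build_context(
    "You are a data analyst. Prefer cautious, sourced answers.",
    ["The dataset covers 2023 only.", "Revenue figures are in EUR."],
    "Why did Q4 revenue dip?",
)
```

The point is not the code but the ordering: the question is interpreted against everything placed before it, which is exactly what fore-understanding means here.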
Can studying phenomenology make you better at training AI?
Yes. Phenomenology studies how meaning and interpretation work at a fundamental level. This changes how you structure prompts, design memory systems, evaluate outputs, and recognize AI limitations. It is the difference between using AI as a tool and understanding why it produces what it produces.
What is persistent memory in AI agents?
Persistent memory allows an AI agent to retain knowledge between sessions — errors it made, rules it learned, context it accumulated. Without structured memory, every session starts from zero. Designing effective memory systems is one of the key challenges in building AI agents that genuinely improve over time.
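A minimal sketch of what such a memory can look like: a small store that survives between sessions by writing learned rules and recorded errors to disk. The file format and method names are assumptions for illustration, not a description of any particular framework.

```python
import json
from pathlib import Path


class AgentMemory:
    """Persistent agent memory: rules and errors survive across sessions."""

    def __init__(self, path):
        self.path = Path(path)
        self.data = {"rules": [], "errors": []}
        if self.path.exists():
            # A new session starts from what was learned before, not from zero.
            self.data = json.loads(self.path.read_text())

    def learn_rule(self, rule):
        if rule not in self.data["rules"]:
            self.data["rules"].append(rule)
        self._save()

    def record_error(self, error):
        self.data["errors"].append(error)
        self._save()

    def _save(self):
        self.path.write_text(json.dumps(self.data, indent=2))
```

Loading the same path in a later session restores the accumulated rules and errors, which is the whole point: the agent's next interpretation starts from what it already knows.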
Who should control what an AI agent learns?
When an AI agent moves from a personal tool to a team resource, the question of who can shape its knowledge becomes critical. Uncontrolled access to an agent's memory can contaminate its learned behaviors. Role-based access control is not just about permissions — it is about governance over institutional knowledge.
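The governance idea above can be sketched as a simple permission check in front of every memory write. The role names and permission sets here are illustrative assumptions; a real deployment would tie them to the team's actual identity system.

```python
# Illustrative role-to-permission mapping (an assumption, not a standard).
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}


def authorize(role, action):
    """Return True if the role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())


class GovernedMemory:
    """Agent memory whose writes are gated by role, not open to everyone."""

    def __init__(self):
        self.entries = []

    def add(self, role, entry):
        if not authorize(role, "write"):
            # Uncontrolled writes are how learned behavior gets contaminated.
            raise PermissionError(f"role '{role}' cannot write to agent memory")
        self.entries.append(entry)
```

The check is trivial; the design decision it encodes is not. Deciding which roles may write to the agent's memory is deciding who gets to shape institutional knowledge.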