Richard Golian

Born in 1995. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.





Teaching AI to Understand

6 articles

What AI Hides: System Prompts, Hallucinations, Alignment and the Concealment Built Into Every Model

Before you can teach AI to understand anything, you need to see what it is hiding from you.

11 April 2026·667 reads
When Your AI Agent Joins the Team — Who Gets to Teach It?

The moment other people needed access to it, the problem changed completely. It was no longer about whether the agent could learn. It was about who gets to teach it...

8 April 2026·824 reads
Training an AI Agent That Learns Between Sessions

I wanted to build an agent that doesn't just assist. One that acts...

4 April 2026·875 reads
Local AI Model Limitations: Why I Switched from Ollama to Claude for Autonomous Agents

This is what I learned about local vs cloud AI, and why I switched to Claude Code...

3 April 2026·1,475 reads

Common questions about this topic

What is the structure of understanding and what does it have to do with AI?
The structure of understanding describes how interpretation works — what shapes meaning before we even begin to think consciously. Every interpretation is grounded in fore-understanding, attunement, and available concepts. AI lacks these structures entirely — it has no fore-understanding unless you provide it. That is why how you structure context, set up system prompts, and design memory fundamentally changes what an AI produces. Understanding these structures gives you an advantage most people working with AI do not have.
Can studying phenomenology make you better at training AI?
Yes. Phenomenology studies how meaning and interpretation work at a fundamental level. This changes how you structure prompts, design memory systems, evaluate outputs, and recognize AI limitations. It is the difference between using AI as a tool and understanding why it produces what it produces.
What is persistent memory in AI agents?
Persistent memory allows an AI agent to retain knowledge between sessions — errors it made, rules it learned, context it accumulated. Without structured memory, every session starts from zero. Designing effective memory systems is one of the key challenges in building AI agents that genuinely improve over time.
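The mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not the design from any of the articles: the file name, the memory schema, and the helper names are all assumptions made for the example.

```python
import json
from pathlib import Path

# Illustrative storage location; a real agent might use a database or vector store.
MEMORY_FILE = Path("agent_memory.json")

def load_memory():
    """Load knowledge accumulated in previous sessions; start empty on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"rules": [], "errors": [], "context": {}}

def save_lesson(memory, kind, entry):
    """Record a learned rule or past mistake so the next session starts ahead."""
    memory[kind].append(entry)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

memory = load_memory()
save_lesson(memory, "rules", "Confirm destructive commands before running them.")
```

The point of the sketch is the loop, not the storage: whatever the backend, the agent reads its accumulated rules at session start and writes new lessons before session end, so no session begins from zero.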
Who should control what an AI agent learns?
When an AI agent moves from a personal tool to a team resource, the question of who can shape its knowledge becomes critical. Uncontrolled access to an agent's memory can contaminate its learned behaviors. Role-based access control is not just about permissions — it is about governance over institutional knowledge.
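One way to picture that governance layer is a small role check in front of the agent's memory. The roles, permission names, and two-step propose/commit flow below are illustrative assumptions, a sketch of the idea rather than a prescribed implementation.

```python
from dataclasses import dataclass, field

# Illustrative role model: anyone may read, members may suggest lessons,
# only trainers may commit them into the agent's long-term memory.
PERMISSIONS = {
    "viewer": {"read"},
    "member": {"read", "propose"},
    "trainer": {"read", "propose", "commit"},
}

@dataclass
class AgentMemory:
    lessons: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    def propose(self, role, lesson):
        """Suggest a lesson for review; does not change learned behavior yet."""
        if "propose" not in PERMISSIONS[role]:
            raise PermissionError(f"role '{role}' cannot propose lessons")
        self.pending.append(lesson)

    def commit(self, role, lesson):
        """Gate writes: uncontrolled commits would contaminate learned behavior."""
        if "commit" not in PERMISSIONS[role]:
            raise PermissionError(f"role '{role}' cannot commit lessons")
        self.lessons.append(lesson)

memory = AgentMemory()
memory.propose("member", "Run generated SQL through EXPLAIN first.")
memory.commit("trainer", "Run generated SQL through EXPLAIN first.")
```

The propose/commit split matters: team members can still teach the agent, but every change to institutional knowledge passes through someone accountable for it.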
NEWSLETTER
What I write about, what I am working on, what I learned.
Sent the first Sunday of the month. Unsubscribe anytime.