Richard Golian

Born in 1995. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.


When Your AI Agent Joins the Team

AI agent access control, shared memory, and role-based permissions
Richard Golian · 825 reads
Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.

From learning to access control

In my previous article, I described building an AI agent that learns between sessions. An agent with structured memory and a self-correction loop. A system for accumulating experience across runs.

The moment other people needed access to it, the problem changed completely. It was no longer about whether the agent could learn. It was about who gets to teach it.

What happens when your team gets access to an AI agent

Most articles about AI agents focus on what the agent can do. Very few talk about what happens when someone other than the builder starts using it.

I built a Slack bot. The idea was simple: give the team a way to interact with the agent directly — ask questions, request analyses, get reports. No terminal. No code. Just Slack.

It worked immediately. And that is when the real problem appeared.

Free chat with an AI agent is a powerful interface. It is also a risk. If anyone on the team can write anything to the agent, they can accidentally overwrite its memory, change its behaviour, or trigger actions that were never intended. The agent does not judge authority. It processes input.

The question was not whether the team should have access. The question was: what kind of access?

Role-based access control for AI agents

I ended up with a role system. Three levels: admin, analyst, viewer.

The viewer can read reports and see what the agent produces. Nothing more. No commands, no chat, no influence on behaviour.

The analyst can do more. They can ask questions. They can run predefined commands. And — this is the important part — they can write to the agent's memory. But only through an explicit command, not through free conversation. If an analyst types a memory instruction in the right format, the agent saves it. If they try to write it as a casual message, the system ignores it.

The admin has unrestricted access. Free chat, direct commands, memory writes, configuration changes.
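The three levels can be pictured as a simple permission table. This is an illustrative sketch, not the actual implementation: the role names come from the article, but the permission names and function are assumptions.

```python
from enum import Enum

class Role(Enum):
    VIEWER = "viewer"
    ANALYST = "analyst"
    ADMIN = "admin"

# Permission sets per role. The permission names are illustrative.
PERMISSIONS = {
    Role.VIEWER: {"read_reports"},
    Role.ANALYST: {"read_reports", "ask_questions", "run_commands",
                   "write_memory_via_command"},
    Role.ADMIN: {"read_reports", "ask_questions", "run_commands",
                 "write_memory_via_command", "free_chat", "change_config"},
}

def is_allowed(role: Role, action: str) -> bool:
    """Check whether a role may perform an action."""
    return action in PERMISSIONS.get(role, set())
```

The point of the table is visible in the gaps: only the admin gets `free_chat`, and the analyst's memory access exists only as a command, never as conversation.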

This sounds like a standard permission model. But the distinction that matters is not who can read or write. It is who can teach. Because every memory entry changes what the agent knows. And what the agent knows shapes every future output.

Why AI agent memory becomes a shared knowledge base

This is something I did not fully appreciate until I saw it in practice.

In the previous article, I described the structured memory layer — a file the agent reads before each run, containing lessons from past sessions. What I did not say is what happens when that memory becomes shared.

The moment multiple people contribute to the agent's memory, it stops being a personal tool. It becomes a shared knowledge base. Every entry affects every future session — not just for the person who wrote it, but for everyone who interacts with the agent.
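A memory layer like this can be sketched in a few lines. The file name, JSON format, and field names below are assumptions, not the article's actual schema; the sketch only shows the mechanism of a file read before each run.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical path and format

def load_memory() -> list[dict]:
    """Read accumulated lessons before a session starts."""
    if not MEMORY_FILE.exists():
        return []
    return json.loads(MEMORY_FILE.read_text())

def build_system_context(lessons: list[dict]) -> str:
    """Prepend every stored lesson to the agent's context.

    Note that every entry is included regardless of who wrote it --
    the agent treats all of them as equal truths, which is exactly
    why write access to this file has to be controlled.
    """
    return "\n".join(f"- [{l['author']}] {l['lesson']}" for l in lessons)
```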

Uncontrolled access to that memory is a real risk. Not because people have bad intentions. But because the agent does not distinguish between a well-considered methodological insight and a casual remark someone typed without thinking. It treats both as equal truths.

Command-only memory access for analysts was the compromise. You can contribute. But you do it deliberately, in a structured format, and it is logged.
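Command-only writes can be enforced with a strict format check: a message either matches the explicit command syntax and is saved and logged, or it is ignored. The `!remember` syntax and the log format here are assumptions for illustration, not the system's real command.

```python
import re
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.memory")

# Hypothetical command syntax: "!remember <lesson text>"
MEMORY_COMMAND = re.compile(r"^!remember\s+(.+)$", re.DOTALL)

def try_memory_write(author: str, message: str, memory: list[dict]) -> bool:
    """Save a lesson only if the message uses the explicit command format.

    Casual messages fall through and never touch memory.
    """
    match = MEMORY_COMMAND.match(message.strip())
    if not match:
        return False  # free-form chat: ignored, as described above
    entry = {
        "author": author,
        "lesson": match.group(1).strip(),
        "saved_at": datetime.now(timezone.utc).isoformat(),
    }
    memory.append(entry)
    log.info("memory write by %s: %s", author, entry["lesson"])
    return True
```

The same check doubles as the audit trail: because every accepted write goes through one function, logging each contribution is one line.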

What happens when the agent gets the methodology wrong

This one caught me off guard.


Summary

I built an AI agent with persistent memory and gave my team access through Slack. What followed was an unexpected lesson: the hardest part of deploying an AI agent is not making it smart — it is deciding who gets to teach it. The questions that mattered were about access, shared memory, and what happens when the agent gets the methodology wrong.

Common questions on this article's topic

What is role-based access control for AI agents?
Role-based access control (RBAC) for AI agents means assigning different permission levels to team members based on their role. In this article, three levels are described: viewer (read-only access to reports), analyst (can ask questions, run commands, and write to agent memory through explicit commands only), and admin (unrestricted access including free chat and configuration changes). The key distinction is not who can use the agent, but who can teach it — because memory entries permanently shape all future outputs.
Why is AI agent memory a risk when shared across a team?
When multiple people can write to an AI agent's memory, it becomes a shared knowledge base. Every entry affects every future session for everyone. The risk is that the agent treats all memory entries as equal truths — it cannot distinguish between a carefully considered methodological insight and a casual remark typed without thinking. Without access controls, one careless message can silently change the agent's behaviour for the entire team.
Can AI agents get trapped in logical errors?
Yes. In the article, an agent produced a confident result that was off by an order of magnitude. When corrected, it accepted the new framing but reproduced the same structural error inside it — cycling through five rounds of corrections before humans identified the root flaw. AI agents accept your framing and work within it, but they can get trapped cycling through variations of the same mistake without recognising the underlying problem.
What is the difference between an AI agent correcting itself and actually learning?
Self-correction happens within a single session — the agent catches errors against a fixed schema. Learning happens across sessions — each run leaves a trace that the next session can use. In this article, the agent learned when a methodological disagreement was resolved by humans and the correct method was saved to memory. The agent didn't discover the fix itself; humans did. But the memory ensured the lesson carried forward to every future session.
How do you deploy an AI agent for a team safely?
The article describes a practical approach: build a Slack bot interface, implement a role system (admin, analyst, viewer), and restrict memory writes to explicit structured commands rather than free conversation. The key insight is that giving a team access to an AI agent is not just a technical deployment — it's an organisational decision about who gets to shape what the agent knows.
What is the biggest challenge when scaling AI agents from solo use to team use?
The biggest challenge is not technical — it's organisational. When only the builder uses the agent, memory and behaviour remain consistent. When a team gets access, anyone can potentially change what the agent knows, introduce incorrect methodologies, or overwrite established patterns. The article argues that the access control layer — who can interact with the agent and how — is just as important as the learning mechanism itself.

If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.

