Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.
When Your AI Agent Joins the Team
AI agent access control, shared memory, and role-based permissions
From learning to access control
In my previous article, I described building an AI agent that learns between sessions. An agent with structured memory and a self-correction loop. A system for accumulating experience across runs.
The moment other people needed access to it, the problem changed completely. It was no longer about whether the agent could learn. It was about who gets to teach it.
What happens when your team gets access to an AI agent
Most articles about AI agents focus on what the agent can do. Very few talk about what happens when someone other than the builder starts using it.
I built a Slack bot. The idea was simple: give the team a way to interact with the agent directly — ask questions, request analyses, get reports. No terminal. No code. Just Slack.
It worked immediately. And that is when the real problem appeared.
Free chat with an AI agent is a powerful interface. It is also a risk. If anyone on the team can write anything to the agent, anyone can accidentally overwrite its memory, change its behaviour, or trigger actions that were never intended. The agent does not judge authority. It processes input.
The question was not whether the team should have access. The question was: what kind of access?
Role-based access control for AI agents
I ended up with a role system. Three levels: admin, analyst, viewer.
The viewer can read reports and see what the agent produces. Nothing more. No commands, no chat, no influence on behaviour.
The analyst can do more. They can ask questions. They can run predefined commands. And — this is the important part — they can write to the agent's memory. But only through an explicit command, not through free conversation. If an analyst types a memory instruction in the right format, the agent saves it. If they try to write it as a casual message, the system ignores it.
The admin has unrestricted access. Free chat, direct commands, memory writes, configuration changes.
This sounds like a standard permission model. But the distinction that matters is not who can read or write. It is who can teach. Because every memory entry changes what the agent knows. And what the agent knows shapes every future output.
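The three-level model can be sketched as a simple permission table. This is a minimal illustration, not the actual implementation; the role names match the article, but the specific action names are assumptions for the sake of the example.

```python
from enum import Enum

class Role(Enum):
    VIEWER = "viewer"
    ANALYST = "analyst"
    ADMIN = "admin"

# Hypothetical action names; the point is the shape of the model,
# not the exact vocabulary.
PERMISSIONS = {
    Role.VIEWER: {"read_reports"},
    Role.ANALYST: {"read_reports", "ask_questions", "run_commands",
                   "write_memory_via_command"},
    Role.ADMIN: {"read_reports", "ask_questions", "run_commands",
                 "write_memory_via_command", "free_chat", "configure"},
}

def can(role: Role, action: str) -> bool:
    """Check whether a role is allowed to perform an action."""
    return action in PERMISSIONS[role]
```

Note that the analyst's memory write is a distinct action from free chat: the table encodes "who can teach" separately from "who can talk".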
Why AI agent memory becomes a shared knowledge base
This is something I did not fully appreciate until I saw it in practice.
In the previous article, I described the structured memory layer — a file the agent reads before each run, containing lessons from past sessions. What I did not say is what happens when that memory becomes shared.
The moment multiple people contribute to the agent's memory, it stops being a personal tool. It becomes a shared knowledge base. Every entry affects every future session — not just for the person who wrote it, but for everyone who interacts with the agent.
Uncontrolled access to that memory is a real risk. Not because people have bad intentions. But because the agent does not distinguish between a well-considered methodological insight and a casual remark someone typed without thinking. It treats both as equal truths.
Command-only memory access for analysts was the compromise. You can contribute. But you do it deliberately, in a structured format, and it is logged.
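A rough sketch of what "deliberate, structured, and logged" can look like in code. The `!remember` command syntax is a hypothetical example, not the format the bot actually uses; the idea is that only an explicit command from an authorised role reaches memory, and every write carries an author and a timestamp.

```python
import datetime
import re

# Hypothetical command format: "!remember <lesson text>"
MEMORY_CMD = re.compile(r"^!remember\s+(.+)$", re.DOTALL)

def handle_message(user: str, role: str, text: str,
                   memory: list, audit_log: list) -> bool:
    """Write to agent memory only for explicit commands from
    analysts or admins. Casual messages never touch memory."""
    match = MEMORY_CMD.match(text.strip())
    if match is None:
        return False  # free-form chat is ignored by the memory layer
    if role not in ("analyst", "admin"):
        return False  # viewers cannot teach the agent
    entry = {
        "author": user,
        "lesson": match.group(1).strip(),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    memory.append(entry)
    audit_log.append(entry)  # every write is attributed and logged
    return True
```

The audit log is what turns shared memory from a liability into a knowledge base: when a lesson later turns out to be wrong, you know who added it and when.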
What happens when the agent gets the methodology wrong
This one caught me off guard.
Summary
If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.