AI Agents That Run Where Your Data Lives
There’s a pattern I keep seeing in enterprise AI conversations. Someone shows an impressive demo — an agent that analyses logs, responds to incidents, or automates a workflow. The room is excited. Then someone asks: “Where does this run? And where does our data go?”
And the room goes quiet.
Here’s the thing most people get wrong about AI agents and data sovereignty: the question isn’t whether the LLM runs locally. The question is where the agent operates — where your data is processed, where decisions are made, where actions are taken, and who controls all of that.
An AI agent can talk to an external model. That’s fine. What matters is that the agent itself — the thing that reads your logs, touches your infrastructure, accesses your customer data — runs inside your environment, under your governance, with your permissions.
Think of it like hiring a consultant. You might bring in someone whose expertise lives outside your company. But you don’t ship them all your data and let them work from their own office with no oversight. They come to you. They work in your systems. They follow your rules. And when they leave, you know exactly what they did.
That’s the model we built Klaus around.
Klaus deploys AI agents into your Kubernetes clusters. The agents run continuously — analysing, responding, managing — on your infrastructure. They can use external LLMs (Claude, GPT, open-source models, whatever fits your requirements), but the agent itself lives where your data lives. Your data doesn’t get shipped somewhere else for processing. The actions happen locally. The audit trail is yours.
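
To make that boundary concrete, here’s a rough sketch of what an in-cluster agent loop can look like. To be clear, this is not Klaus’s code; the file paths, endpoint, and function names are placeholders. What it shows is where each step happens: the raw data is read and handled inside the cluster, the only thing that leaves is the prompt the agent constructs (and only if you point it at an external model), and the action and the audit record stay local.

```python
# Hypothetical sketch, not Klaus's implementation. It illustrates the boundary:
# raw data, actions, and the audit trail stay in-cluster; only the prompt the
# agent constructs goes to the model endpoint you configure.
import json
import time
import urllib.request

LLM_ENDPOINT = "https://api.example-llm.com/v1/chat/completions"  # placeholder
LLM_API_KEY = "<injected via a Kubernetes Secret>"                # placeholder


def read_recent_logs() -> str:
    """Placeholder: read logs from an in-cluster source (a file, Loki, etc.)."""
    with open("/var/log/app/app.log") as f:
        return f.read()[-4000:]  # keep the prompt small; redact here if needed


def ask_model(prompt: str) -> str:
    """Call an OpenAI-compatible chat endpoint, external or in-cluster."""
    req = urllib.request.Request(
        LLM_ENDPOINT,
        data=json.dumps({
            "model": "your-model",
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {LLM_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


def act_and_audit(decision: str) -> None:
    """Placeholder: this is where the agent would act (for example via the
    Kubernetes API, using its own ServiceAccount and RBAC); here it only
    appends a record to a local audit log."""
    with open("/var/log/agent/audit.log", "a") as audit:
        audit.write(json.dumps({"ts": time.time(), "decision": decision}) + "\n")


if __name__ == "__main__":
    while True:  # the agent runs continuously, not just in a demo
        logs = read_recent_logs()  # data is read and processed inside the cluster
        decision = ask_model("Summarise anomalies in these logs:\n" + logs)
        act_and_audit(decision)    # action and audit trail stay local
        time.sleep(300)
```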
If you’ve used tools like Claude Code, Klaus is what that concept looks like when it’s deployed into enterprise infrastructure — running continuously, governed, and under your operational control.
The name is German — because we are. (It’s a hard K, by the way.)
This distinction — between where the model runs and where the agent operates — matters more than most vendors want to admit. A lot of AI tooling today bundles everything into a hosted service: the model, the orchestration, the data access, the actions. That’s convenient. It’s also a non-starter for enterprises with compliance requirements, data residency obligations, or simply a security posture that doesn’t allow sensitive data to leave their perimeter.
With Klaus, you choose the model. You control the data. The agent runs in your cluster. And if you ever want to run a local LLM as well — because your compliance requirements demand it, or because you want to reduce external dependencies — that path is open too.
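
As a rough illustration of that choice (again, placeholder names, not Klaus configuration), the only thing that changes in the sketch above is the endpoint the agent talks to: point it at an OpenAI-compatible server running inside the cluster, such as vLLM or Ollama behind a Service, and nothing leaves the perimeter at all.

```python
import os

# Hypothetical: default to an in-cluster, OpenAI-compatible model server and
# override via environment variables (injected through the agent's Deployment)
# when an external provider is permitted.
LLM_ENDPOINT = os.environ.get(
    "LLM_ENDPOINT",
    "http://llm.ai-system.svc.cluster.local/v1/chat/completions",
)
LLM_API_KEY = os.environ.get("LLM_API_KEY", "")  # empty for a local model
```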
Here’s what Klaus is not: a model provider. We don’t build foundation models. We don’t compete with OpenAI or Anthropic on that level. What we build is the operational layer — the part that makes it possible to run AI agents reliably, continuously, and under your control.
There’s a gap in the market that’s hard to see if you’re focused on model capabilities: the gap between “it worked in the demo” and “it runs reliably at 3am without anyone watching.” That gap is operations. And it’s exactly the gap Giant Swarm has always filled — first for Kubernetes, now for AI workloads.
Klaus doesn’t make AI magical. It makes AI operational. And in enterprise environments, that’s the harder problem.
If you’re exploring how to run AI agents on your own infrastructure — with the governance and reliability your organisation requires — I’d be happy to walk you through what we’ve built.