I use AI coding agents every day.
And the biggest problem isn't the AI itself. It's everything around it.
API keys sitting in .env files on my laptop. Agents that can access anything on my machine. And if something breaks, good luck figuring out what the agent actually did.
For side projects? Sure, whatever.
For a team of hundreds of engineers? That's a problem.
Here's what I think the right approach looks like:
AI agents should run in cloud environments. Not on developer laptops. They need clear rules about what they can and can't access. And every action should be tracked.
That's what 𝗖𝗼𝗱𝗲𝗿 does.
It's an open-source platform for self-hosted development environments. You define everything with Terraform, and both devs and AI agents work inside the same governed workspaces.
Three things stood out to me:
- 𝗔𝗜 𝗕𝗿𝗶𝗱𝗴𝗲. It's a gateway between your agents and the LLM providers. Instead of every developer managing their own API keys, auth is handled centrally. Every prompt and tool call is logged per user.
- 𝗔𝗴𝗲𝗻𝘁 𝗕𝗼𝘂𝗻𝗱𝗮𝗿𝗶𝗲𝘀. Think of it as a firewall for AI agents. You define which domains an agent can reach, and it blocks everything else. If a prompt injection tricks the agent into sending data to a malicious server, the request is blocked before it ever leaves.
- 𝗖𝗼𝗱𝗲𝗿 𝗧𝗮𝘀𝗸𝘀. You label a GitHub issue, a background agent picks it up, works through it, and opens a PR. You step in to review when it's done.
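To make the egress-boundary idea concrete, here's a minimal sketch of a domain allowlist. This is illustrative only, not Coder's actual configuration or API; the domains and the helper function are made up:

```python
# Illustrative sketch of an egress allowlist for an AI agent.
# The domain list and helper below are hypothetical, not Coder's API.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.github.com", "registry.npmjs.org"}  # example allowlist

def is_request_allowed(url: str) -> bool:
    """Allow outbound requests only to explicitly approved domains."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS

# A prompt-injected exfiltration attempt never leaves the workspace:
is_request_allowed("https://attacker.example/steal?data=secrets")  # → False
is_request_allowed("https://api.github.com/repos")                 # → True
```

The point is where the check runs: at the environment boundary, outside the agent's control, so a compromised prompt can't talk its way past it.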
It runs on Kubernetes, on any cloud, or even on-premises.
Works with VS Code, JetBrains, Cursor, and any AI coding agent you want.
If you're trying to figure out how to let your team use AI agents without creating a security nightmare, check out Coder Workspaces, the free community edition: https://fandf.co/4qUSmXf
Companies like Square, Dropbox, and Goldman Sachs already use it.
Thanks to @coderhq for collaborating with me on this post.