Software Engineering

Governing AI coding agents: access control, auditability, and security

Establish infrastructure-level guardrails for AI coding agents: centralize LLM access and credential management, log all prompts and tool calls for auditability, restrict network egress to prevent prompt injection and data exfiltration, and run agents in governed cloud environments with controlled tooling and audit trails. Layering these controls makes AI agent activity transparent, attributable, and safe at scale.

Why the human is still essential here

Security and engineering teams define access policies, review audit logs, investigate incidents, and make risk decisions; humans stay accountable for all governance configuration, exception handling, and final policy enforcement.

How people use this

Ephemeral AI-ready dev workspaces

Provision ephemeral, policy-controlled cloud workspaces with preinstalled build tools and an AI agent so all coding happens inside governed infrastructure rather than on developer laptops.
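A minimal sketch of what such a template can look like with Coder's Terraform provider and a Kubernetes pod. The image name, namespace, and startup script are illustrative assumptions, not values from this article:

```hcl
terraform {
  required_providers {
    coder      = { source = "coder/coder" }
    kubernetes = { source = "hashicorp/kubernetes" }
  }
}

data "coder_workspace" "me" {}

# The agent process that runs inside the workspace and connects back to Coder.
resource "coder_agent" "main" {
  os             = "linux"
  arch           = "amd64"
  startup_script = "echo 'install build tools and an AI agent here'" # illustrative
}

# Ephemeral: the pod exists only while the workspace is started.
resource "kubernetes_pod" "workspace" {
  count = data.coder_workspace.me.start_count
  metadata {
    name      = "coder-${data.coder_workspace.me.name}"
    namespace = "coder-workspaces" # illustrative namespace
  }
  spec {
    container {
      name    = "dev"
      image   = "ghcr.io/example/dev-image:latest" # hypothetical image with build tools
      command = ["sh", "-c", coder_agent.main.init_script]
      env {
        name  = "CODER_AGENT_TOKEN"
        value = coder_agent.main.token
      }
    }
  }
}
```

Because the pod is created with `start_count`, stopping the workspace tears the compute down, so nothing accumulates on developer laptops.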

Coder / Terraform / Kubernetes

Managed secrets for agent sessions

Inject short-lived credentials into the remote workspace at runtime so agents can access required services without storing API keys in local .env files.
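A sketch of the injection step, assuming Vault's KV v2 engine and extending a Coder template's agent resource. The mount, secret path, and variable name are illustrative assumptions:

```hcl
# Read a credential from Vault at workspace build time
# (mount and path names are illustrative).
data "vault_kv_secret_v2" "llm" {
  mount = "secret"
  name  = "ai-agents/llm"
}

resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
  # Injected as environment variables inside the remote workspace only --
  # nothing is written to a local .env file on a laptop.
  env = {
    LLM_API_KEY = data.vault_kv_secret_v2.llm.data["api_key"]
  }
}
```

For genuinely short-lived credentials, the same pattern works with Vault's dynamic secrets engines, so each workspace build gets credentials that expire on their own schedule.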

HashiCorp Vault / Coder

Standardized remote IDE and agent setup

Have developers and agents connect to the same remote workspace through approved IDEs to ensure consistent tooling, logging, and access controls.
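One way to standardize entry points is to declare the approved IDEs in the template itself, for example with Coder's `coder_app` resource. The slug and URL below are illustrative assumptions:

```hcl
# Expose an approved browser IDE from the governed workspace
# (slug, display_name, and url are illustrative).
resource "coder_app" "code_server" {
  agent_id     = coder_agent.main.id
  slug         = "code-server"
  display_name = "VS Code in the browser"
  url          = "http://localhost:13337"
}
```

Desktop IDEs reach the same workspace over SSH (e.g. via `coder config-ssh` for VS Code Remote - SSH or JetBrains Gateway), so every session flows through the same access controls and logging.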

VS Code Remote - SSH / JetBrains Gateway / Coder



Community stories (1)


I use AI coding agents every day.


And the biggest problem isn't the AI itself. It's everything around it.


API keys sitting in .env files on my laptop. Agents that can access anything on my machine. And if something breaks, good luck figuring out what the agent actually did.


For side projects? Sure, whatever.


For a team of hundreds of engineers? That's a problem.


Here's what I think the right approach looks like:


AI agents should run in cloud environments. Not on developer laptops. They need clear rules about what they can and can't access. And every action should be tracked.


That's what Coder does.


It's an open-source platform for self-hosted development environments. You define everything with Terraform, and both devs and AI agents work inside the same governed workspaces.


Three things stood out to me:


- AI Bridge. It's a gateway between your agents and the LLM providers. Instead of every developer managing their own API keys, auth is handled centrally. Every prompt and tool call is logged per user.


- ๐—”๐—ด๐—ฒ๐—ป๐˜ ๐—•๐—ผ๐˜‚๐—ป๐—ฑ๐—ฎ๐—ฟ๐—ถ๐—ฒ๐˜€. Think of it as a firewall for AI agents. You define which domains an agent can reach, and it blocks everything else. If a prompt injection tricks the agent into sending data to a bad server, the request gets blocked before it ever leaves.


- ๐—–๐—ผ๐—ฑ๐—ฒ๐—ฟ ๐—ง๐—ฎ๐˜€๐—ธ๐˜€. You label a GitHub issue, a background agent picks it up, works through it, and opens a PR. You step in to review when it's done.


It runs on Kubernetes, any cloud, or even on-premise.
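On Kubernetes, the egress-allowlisting idea behind Agent Boundaries can also be approximated at the cluster layer. The sketch below is a generic Kubernetes NetworkPolicy expressed in Terraform, not Coder's actual implementation; the labels, CIDR, and ports are illustrative assumptions. Note that NetworkPolicy only understands IPs, so domain-level allowlisting additionally needs a proxy in the path:

```hcl
# Default-deny egress for workspace pods; allow only DNS and an
# approved egress proxy (selector, CIDR, and ports are illustrative).
resource "kubernetes_network_policy" "agent_egress" {
  metadata {
    name      = "agent-egress-allowlist"
    namespace = "coder-workspaces"
  }
  spec {
    pod_selector {
      match_labels = { "app" = "coder-workspace" }
    }
    policy_types = ["Egress"]

    # Allow DNS lookups.
    egress {
      ports {
        port     = "53"
        protocol = "UDP"
      }
    }

    # Allow traffic to the egress proxy only; everything else is dropped.
    egress {
      to {
        ip_block {
          cidr = "10.0.42.0/24"
        }
      }
      ports {
        port     = "3128"
        protocol = "TCP"
      }
    }
  }
}
```

With a default-deny posture, a prompt-injected request to an unapproved server never leaves the pod, regardless of what the agent was tricked into attempting.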


Works with VS Code, JetBrains, Cursor, and any AI coding agent you want.


If you're trying to figure out how to let your team use AI agents without creating a security nightmare, check out Coder Workspaces, the free community edition: https://fandf.co/4qUSmXf


Companies like Square, Dropbox, and Goldman Sachs already use it.


Thanks to @coderhq for collaborating with me on this post.

Milan Jovanović, .NET content creator
Mar 9, 2026
Governing AI coding agents: access control, auditability, and security - People Use AI