A single crafted prompt was enough to turn ChatGPT's own runtime into an outbound exfiltration channel. Entire conversations, attached files, and AI-generated summaries could be silently routed out over DNS, a protocol most enterprise egress filters don't even inspect.
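For readers who haven't seen DNS tunneling up close, here is a minimal sketch of the encoding trick, not the exploit Check Point actually found: the domain name, chunk size, and framing below are illustrative assumptions. The core idea is that any code able to trigger a hostname lookup can smuggle data inside the name being looked up.

```python
import base64

# Illustrative attacker-controlled domain (an assumption, not a detail from
# the disclosure). Lookups for any subdomain of it eventually reach whoever
# runs its authoritative name server.
EXFIL_DOMAIN = "collector.example.com"
MAX_LABEL = 63  # DNS caps each dot-separated label at 63 bytes

def to_dns_queries(payload: bytes) -> list[str]:
    """Encode arbitrary bytes as a sequence of well-formed DNS names."""
    # Base32 keeps every character legal in a hostname.
    encoded = base64.b32encode(payload).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL]
              for i in range(0, len(encoded), MAX_LABEL)]
    # Prefix a sequence number so the receiving server can reassemble.
    return [f"{seq}.{chunk}.{EXFIL_DOMAIN}" for seq, chunk in enumerate(chunks)]

for name in to_dns_queries(b"attached file: q3-contract.pdf ..."):
    print(name)  # an attacker resolves each name; printing stands in for that
```

Nothing here travels over HTTP at all; the payload rides inside ordinary-looking lookups that most egress filtering waves through.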
Check Point disclosed the flaw, and OpenAI patched it. That's the happy ending. The uncomfortable part is the class of failure: once your working data lives inside someone else's runtime, every new feature (code execution, tool calls, browsing) becomes a new potential leak path. Each one needs independent auditing. Each one is outside your control.
This is the argument for local-first, compressed into one incident. If the model and the data never leave the boundary you operate, there is no outbound channel to abuse. Not "a stronger filter": no channel. You cannot exfiltrate what never had a route out.
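What "no channel" can look like in practice, as a minimal sketch rather than a description of any particular product: the guard below (the names `NoNetworkSocket` and `drop_network` are hypothetical) is a process-level stand-in for real isolation, which belongs at the OS or container layer, such as a network-less namespace.

```python
import socket

class NoNetworkSocket(socket.socket):
    """Refuses to construct any socket, making 'no route out' observable."""
    def __init__(self, *args, **kwargs):
        raise RuntimeError("outbound networking is disabled in this process")

def drop_network() -> None:
    # Rebind the module-level class; stdlib helpers such as
    # socket.create_connection construct sockets through this name.
    socket.socket = NoNetworkSocket

drop_network()
try:
    socket.socket()  # any attempt to open a network endpoint now fails loudly
except RuntimeError as err:
    print(f"blocked: {err}")

# Caveat: name resolution (getaddrinfo) goes through the C library and
# bypasses this guard, which is exactly why the enforcement that matters
# sits below the process: a host or container with no network at all.
```

The caveat is the point. In-process guards can always be sidestepped by some code path you didn't audit; a boundary with no route out cannot.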
AvenBox runs the model on device. No telemetry, no runtime sandbox you don't control, no hidden channels shipped to you under the label of a feature.