The European Parliament restricting institutional AI tools after a Microsoft Copilot incident is a small headline with a large signal underneath it.
For the last two years the default story was that hosted AI would become universal infrastructure — as quietly embedded as email. The counter-story is now visibly forming: in regulated and sensitive environments, hosted AI is being reclassified from "productivity upgrade" to "operational liability." That reclassification happens once, and then it propagates: first parliaments, then ministries, then the regulated industries that take their cue from them.
The question for any organisation handling sensitive material is no longer "is this AI tool useful?" It's "can we defend using it the day something leaks?" The answer, for a growing number of functions, is no.
Local inference sidesteps that question entirely. The data never reaches a vendor's logs. There is nothing to disclose, because nothing ever leaves the premises.
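In practice, "local inference" often just means pointing the same kind of completion call at a loopback endpoint instead of a hosted API. A minimal sketch, assuming an Ollama-style server listening on localhost:11434 — the endpoint path and model name here are illustrative, not prescriptive:

```python
import json
from urllib.parse import urlparse
from urllib.request import Request

# Illustrative local endpoint (Ollama's default port); swap in whatever
# local inference server your organisation actually runs.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def build_local_request(prompt: str, model: str = "llama3") -> Request:
    """Build a completion request that targets the loopback interface only."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_local_request("Summarise the attached committee minutes.")
# The request resolves to the machine itself, not an external vendor:
print(urlparse(req.full_url).hostname)  # → localhost
```

The defensibility argument is structural rather than contractual: because the hostname is loopback, there is no vendor log to subpoena and no cross-border transfer to assess.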