Blog

AI Agent, AI Spy – Signal Talk from the 39th Chaos Communication Congress

Written by Lisa LeVasseur
January 14, 2026

Once again, great minds at Signal strike at the heart of the impending catastrophic collapse of privacy.

I love this talk from the 39th Chaos Communication Congress (December 2025) by Meredith Whittaker and Udbhav Tiwari so much. Here are my favorite things:

  1. It highlights the “down the stack” progression of unavoidable surveillance functionality into the OS and hardware. The closer to the metal, the greater the data purview and potential risk. Meaning, surveillance at the hardware layer can observe every user of the device, as well as everything those users do on the machine. This is why governance needs to apply different duties to different types of digital products and components.
  2. I also really like how Whittaker dives into what it is to be an “agent”, and agentic AI’s insatiable need for context. If the task scope is narrow, the context is narrow; but in the world of “robot butlers”, as Whittaker calls them, the context is broad: “everything about me”, in order to perform a wide variety of tasks. Herein lies the need for unfettered surveillance. It’s staggering that we might consider ceding “everything about me” to commercial tech makers who have <checks notes> never acted in a trustworthy fashion and never will so long as digital product safety remains unregulated. Capitalism favors the manufacturer and exploits natural resources and humans, as both customers and laborers.
  3. We could and perhaps should rename “AI” to “amplified intensity”, because it amplifies digital product safety risks to humans with alarming alacrity. In my Enigma talk last year, I described it as pouring gasoline on a privacy dumpster fire. This CCC talk concretizes just a few of the risks, with a special focus on the even greater amplifying effect of the Model Context Protocol (MCP), i.e. the lingua franca (or maybe more accurately, lingua francas[1]) for AI agents to talk to each other. I’m reminded of that old Faberge Organics shampoo commercial. What could possibly go wrong with unending autonomous communication between unknown third parties?

They highlight prompt injection attacks, noting that MCP “standardizes the exfiltration path for attackers.” Nifty.

  4. Whittaker clarifies the difference between deterministic software and probabilistic software, demonstrated in her explanation of “The Mathematics of Failure”. When each step in a technology process chain is even 95% accurate, a 30-step chain doesn’t deliver 95% overall accuracy but a horrifying 21.4% likelihood of success (remember multiplying fractions? there’s a short worked sketch right after this list). Nearly every agentic task that will be created so we can enjoy our “robot butlers” will have at least thirty steps. Who on earth would back a product with such a poor accuracy outlook?
  5. Which leads us to the overinvestment/AI-hype situation we find ourselves in. With trillions upon trillions of dollars being invested in this technology (because apparently we’re too feeble to actually do Things; or we’re so amazing that our time needs to be spent on perfuming the world with our own special brand of greatness, pick your poison), there is literally no break-even point on the horizon. Once again, the Amplified Intensity and impact of AI: too big to fail on steroids.
  6. They emphasize that there is not an obvious root fix, but they offer three “band-aids”:
    1. Stop reckless deployment. (I cannot believe we’re still in the move fast and break things epoch. Capitalism knows no shame.)
    2. Privacy by default. They phrase it as inverting the permission model from opt-out to opt-in [to surveillance]; a toy sketch of what that inversion looks like follows this list. Unfortunately, we have reified opt-out in law (CPRA, I’m looking at you). They’re right, of course. At Internet Safety Labs (ISL) we have made privacy by default a core principle for a digital product to be regarded as “safe”.
    3. Transparency. In the talk, they’re mainly focused on transparency of agent behavior, and once again, of course that’s necessary. Heck, our entire mission is built on the premise that transparency drives safer technology and manufacturer accountability. But I have two concerns about this particular transparency: (1) we know quite a lot about transparency at ISL (given our production of safety labels at https://appmicroscope.org), and it seems that we might be careening inexorably towards a transparency deluge, the likes of which will make current privacy policies seem like, well, AI-generated summaries. (2) Transparency isn’t going to be overly helpful in a world of unbounded, probabilistically behaving software agents.
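Here’s the promised sketch of the failure math. The 95% per-step accuracy and the 30-step chain are the talk’s figures; the code itself is just my illustration of compounding probabilities, not anything Signal presented.

```python
# Compounding accuracy across an agentic chain: if every step succeeds
# independently with probability p, an n-step chain succeeds with p ** n.

def chain_success(p_step: float, n_steps: int) -> float:
    """Probability that all n_steps succeed, assuming independent steps."""
    return p_step ** n_steps

print(f"{chain_success(0.95, 30):.2%}")   # ~21.46% -- the "Mathematics of Failure"
print(f"{chain_success(0.99, 30):.2%}")   # ~73.97% -- even 99% per step isn't great
```

The independence assumption is mine; correlated failures could make the outcome better or worse, but the compounding point stands either way.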
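And here’s the toy sketch of privacy by default mentioned in band-aid 2. Every name below is hypothetical, not any real product’s API; the point is simply that every data-sharing switch starts off, and nothing flows until a person explicitly flips it on.

```python
from dataclasses import dataclass, field

@dataclass
class SharingPermissions:
    # Opt-in model: every switch defaults to False (nothing is shared).
    contacts: bool = False
    location: bool = False
    browsing_history: bool = False

@dataclass
class UserProfile:
    name: str
    permissions: SharingPermissions = field(default_factory=SharingPermissions)

    def may_share(self, category: str) -> bool:
        # Data leaves the device only after an explicit, affirmative opt-in.
        return getattr(self.permissions, category, False)

user = UserProfile("example")
print(user.may_share("location"))   # False -- the safe default
user.permissions.location = True    # the user deliberately opts in
print(user.may_share("location"))   # True
```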

When it comes to software: Complexity + Time + Probabilistic Behavior = Increasingly unknowable, unpredictable chaos

We heard Facebook engineers admit five years ago that data flow was already unknowable for them—it was deterministic, but it wasn’t a closed system, ergo, unpredictable.

Which isn’t to say these band-aids aren’t valuable. They are. And there are other things we could do if we were serious about privacy, such as banning the sale, or sharing for consideration, of personally identifiable information. A person can dream.

Meanwhile, I count the world lucky to have people like Meredith and Udbhav calling out “AI” truths in a powerful, accurate, and highly understandable way.

[1] Francae? Plural. Because there is no world where a single one wins out. I hope.