One thing about the discussions I’m seeing on security, encryption, and backdoors is that they remind me so much of what resilience and safety engineering went through a few decades ago

“Zero incidents” doesn’t work as a mindset in resilience *or* security. So the question for me is what are the arguments we can make and what are the tools we can build that enable this mindset. Rather than saying “no, it doesn’t work like that. go away” can we say “no, it doesn’t work like that, but here’s what *does*”?

#FOSDEM #FOSDEM2026

@hazelweakly
Huh, can you expand? Security has been pretty solidly in "incidents happen, design for response" for a long time. Part of that means moving to tooling that's less likely to cause vulns, but that's only ever one part of the story. Intentional backdoors, whether it's states breaking crypto or malicious takeovers of libraries are a somewhat different category with different responses.

@dymaxion The emerging parts of security, yes, but it evolves pretty slowly. The intersection of security, regulation, and compliance still seems to mostly produce very compartmentalised solutions. Things like “put the solution into a neat little categorised box”, or “identify the X risk vectors, and addressing them solves the issue”

But what ends up happening is that if you work somewhere that sits on the frontier of compliance and regulation, this comes up fairly regularly

@dymaxion In this case, the mindset of “classify items and assess risk via component analysis” seems to consistently lead to the conclusion that “if nation states could analyse all communications everywhere, then they could maximally assess risk”.

In other words, it sounds like the closed-world hypothesis: total information implies perfect control of the environment. Type 1 safety, “zero incidents”, and other such mindsets all share that assumption.

I’m not sure we know how to write effective policy or regulations in a way that *doesn’t* imply this outcome, because I continue to consistently see it. Hence the question: how do we teach effective understanding of complex systems to these groups? How can we respond with “yes, and” rather than attempting to shut down the conversation by stating that the approach they’re attempting isn’t possible, without offering an alternative that’s more effective?

@hazelweakly
The national security thing is, I think, different. The core goal of the state is to survive, and the primary survival tool of the state is control, so states always want to control everything within reach that could impact their survival. So the driver for universal surveillance isn't that it's going to improve state security; it's that universal surveillance is now possible. If in thirty years we end up with brain implants becoming common, then in forty years we're going to be having a debate about whether freedom of thought is compatible with state security, and the answer of the state, sooner or later, is going to be no.

This calculus means it doesn't really matter whether new surveillance is going to work, let alone be efficient. Many companies try to do quantitative security tracking when they don't, and likely never will, have quantitatively meaningful data, because governance is supposed to be about risk, and that means we have to have numbers, so by god numbers we will have. The state does the same. Better yet, the state never actually has to tell you what the numbers are. "Critical for national security" is a magical formula, not an analytic outcome.

@dymaxion @hazelweakly
Does this make you a supporter of Georgism, anarchy, communism, or some techno quasi-breakthrough of bottom-up decentralised communities defended through cryptography?
@unqualifiedtechbros @hazelweakly
I generally like to say an optimistic realist.