One thing about the discussions I’m seeing on security, encryption, and backdoors is that they remind me so much of what resilience and safety engineering went through a few decades ago

“Zero incidents” doesn’t work as a mindset in resilience *or* security. So the question for me is: what arguments can we make, and what tools can we build, that enable a better one? Rather than saying “no, it doesn’t work like that, go away”, can we say “no, it doesn’t work like that, but here’s what *does*”?

#FOSDEM #FOSDEM2026

@hazelweakly
Huh, can you expand? Security has been pretty solidly in "incidents happen, design for response" for a long time. Part of that means moving to tooling that's less likely to cause vulns, but that's only ever one part of the story. Intentional backdoors, whether it's states breaking crypto or malicious takeovers of libraries, are a somewhat different category with different responses.

@dymaxion The emerging parts of security, yes, but it evolves pretty slowly. The intersection of security, regulation, and compliance still seems to mostly produce very compartmentalised solutions. Things like “put the solution into a neat little categorified box”, or “identify X risk vectors, address them, and the issue is solved”

But what ends up happening is that if you work somewhere that sits on the frontier of compliance and regulation, this comes up fairly regularly

@dymaxion In this case, the mindset of “classify items and assess risk via component analysis” seems to consistently lead to the conclusion that “if nation states could analyse all communications everywhere, then they could maximally assess risk”.

In other words, it sounds like the closed-world hypothesis: total information implies perfect control of the environment. Type 1 safety, “zero incidents”, and other such mindsets all share that assumption.

I’m not sure we know how to write effective policy or regulations in a way that *doesn’t* imply this outcome, because I keep seeing it. Hence the question: how do we teach these groups an effective understanding of complex systems? How can we respond with “yes, and”, rather than shutting the conversation down by saying the approach they’re attempting isn’t possible while offering no more effective alternative?

@hazelweakly
In the more general case, I think the only answer is a combination of liability and public funding. If companies start to be liable for direct customer damages, the incentive structure for building systems changes a lot. Then there's public funding that treats open source software as a public good: paying for maintenance and audits of critical tools, funding tooling that makes it easier to build secure systems, and, as that tooling reaches some level of community acceptance, funding the migration of existing systems onto it.

We see baby steps from the EU on this, but relatively little on the ops-ish side. The hyperscalers have shaped the way we think about operating systems, but the public products they provide are designed with making money as a higher priority than making it easier to operate secure systems, and they've starved the options for folks not running in their clouds, because keeping the bar to going on-prem high is a core part of their business model. There's a huge opportunity for public funding to change that, especially in light of the digital sovereignty conversation.

In the end, though, there's no way to do this without companies accepting that they're not going to be able to write as much code. But if your business model only works because you're polluting the world with negative security outcomes that you treat as an externality, then your business model shouldn't work.