One thing about the discussions I’m seeing on security, encryption, and backdoors is that they remind me so much of what resilience and safety engineering went through a few decades ago

“Zero incidents” doesn’t work as a mindset in resilience *or* security. So the question for me is: what arguments can we make, and what tools can we build, that enable a better mindset? Rather than saying “no, it doesn’t work like that. go away”, can we say “no, it doesn’t work like that, but here’s what *does*”?

#FOSDEM #FOSDEM2026

@hazelweakly
Huh, can you expand? Security has been pretty solidly in "incidents happen, design for response" for a long time. Part of that means moving to tooling that's less likely to cause vulns, but that's only ever one part of the story. Intentional backdoors, whether it's states breaking crypto or malicious takeovers of libraries, are a somewhat different category with different responses.

@dymaxion The emerging parts of security, yes, but it evolves pretty slowly. The intersection of security, regulation, and compliance still seems to mostly result in very compartmentalised solutions: things like “put the solution into a neat little categorified box”, or “identify X risk vectors, and addressing them solves the issue”.

But what ends up happening is that anyone who works somewhere that sits on the frontier edge of compliance and regulation sees this come up fairly regularly.

@dymaxion In this case, the mindset of “classification of items and assessing risk via component analysis” seems to consistently result in the conclusion being “if nation states could analyse all communications everywhere then they can maximally assess risk”.

In other words, it sounds like the closed-world hypothesis: total information implies perfect control of the environment. Type 1 safety, “zero incidents”, and similar mindsets all have that in common.

I’m not sure we know how to write effective policy or regulations in a way that *doesn’t* imply this outcome, because I continue to see it consistently. Hence the question of how we teach effective understanding of complex systems to these groups. How can we respond with “yes, and” rather than shutting down the conversation by stating that the approach they’re attempting isn’t possible, without offering a more effective alternative?

@hazelweakly
Ah, yeah — security governance and compliance absolutely have those mindsets. They're obviously not exactly separate from security engineering in practice, but if security engineering isn't what's driving security practice the result will not be a secure system but a compliant one. This doesn't mean that governance, compliance, and regulation are bad, but they have to be engineering tools more than management tools — a way to drive specific engineering outcomes. If they're serving two masters, they don't really work.

From an eng management perspective, there really is not yet any substitute for experienced security engineers being deeply involved in system design and development from day one. If you have enough time and money, you can build up tooling and guardrails and rebuild your way out of it, but that's exponentially more expensive for a given degree of (unmeasurable until you're at the top of the chart) security quality. This applies equally to non-dev security practices.

@hazelweakly
I'm in the early stages of tinkering with what may end up being a high security-assurance system, in the sort of space where the answer to "we can't audit all these third party libraries" is "then we won't use anything we don't have the capacity to audit, both initially and for all changes, ongoing". This basically gives you a closed world, but at a notable level of overhead. Even in this context, the trade-offs are still there: if it's the mythical security vs usability trade-off, here security wins every time, and if the system isn't used, it doesn't matter anyway. That structure only works because of a relatively unique set of incentives.
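The "audit everything, ongoing" stance above can be mechanised as a gate in CI: refuse any dependency version that isn't on an audited allowlist, so a new release of an already-audited package fails until it's re-reviewed. A minimal sketch — the lockfile shape, ledger format, and package names here are all hypothetical, not from the thread:

```python
# Closed-world dependency gate: every (name, version) pair in the lockfile
# must appear in the audit ledger. Data shapes below are illustrative.

def check_lockfile(lock: list[dict], audited: dict[str, set[str]]) -> list[str]:
    """Return one violation message per dependency outside the audited set.

    Audits are per exact version: a new release of an already-audited
    package still fails until it has been reviewed again.
    """
    violations = []
    for dep in lock:
        name, version = dep["name"], dep["version"]
        if version not in audited.get(name, set()):
            violations.append(f"{name}=={version} has no audit on record")
    return violations

# Hypothetical lockfile contents and audit ledger.
lock = [
    {"name": "left-pad", "version": "1.3.0"},
    {"name": "is-even", "version": "2.0.0"},
]
audited = {"left-pad": {"1.3.0"}}

for problem in check_lockfile(lock, audited):
    print(problem)  # flags is-even, which has no audit entry at all
```

In practice a tool like this would fail the build on any violation; the point is that the check is per version, which is what makes the "ongoing" part of the policy enforceable rather than aspirational.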
@dymaxion @hazelweakly Sounds a little like the initial proposition behind https://oxide.computer ?
@JeremyMcGee
@hazelweakly It's definitely a very good move in the right direction, yes.

@hazelweakly
In the more general case, I think the only answer is a combination of liability and public funding. If companies start to be liable for direct customer damages, the incentive structure for building systems changes a lot. Then add public funding that treats open source software as a public good: pay for maintenance and audits of critical tools, fund tooling that makes it easier to build secure systems, and, as that tooling reaches some level of community acceptance, fund moving existing systems over to it.

We see little baby efforts from the EU for this, but relatively little on the ops-ish side. The hyperscalers have shaped the way we think about operating systems, but the public products they provide are designed with making money as a higher priority than making it easier to operate secure systems — and they've starved the options for folks not operating in their clouds, because raising the bar to going on prem is a core part of their business model. There's a huge opportunity for public funding to change that, especially in the light of the digital sovereignty conversation.

In the end though, there's no way to do this without companies accepting that they're not going to be able to write as much code. But if your business model only works because you're polluting the world with negative security outcomes that you treat as an externality, then your business model shouldn't work.

@hazelweakly
The national security thing is I think different. The core goal of the state is to survive, and the primary survival tool of the state is control, so states always want to control everything they can control that could impact their survival. So the driver for universal surveillance isn't that it's going to improve state security, it's that universal surveillance is now possible. If in thirty years we end up with brain implants becoming common, then in forty years we're going to be having a debate about whether freedom of thought is compatible with state security, and the answer of the state, sooner or later, is going to be no.

This calculus means that it doesn't really matter if new surveillance is going to work, let alone be efficient. Many companies try to do quantitative security tracking when they don't, and likely never will, have quantitatively meaningful data; governance is supposed to be about risk, and that means we have to have numbers, so by god numbers we will have. The state does the same. Better yet, the state never actually has to tell you what the numbers are. "Critical for national security" is a magical formula, not an analytic outcome.

Totally agreed. And not just at the national security level, we see the same thing with discussions at the local level of surveillance systems like Flock and ShotSpotter.

@dymaxion @hazelweakly

@jdp23
@hazelweakly Yeah — sub-parts of the state have the same core goal; corporations are semi-statelike — same strategy, but the goal is profit rather than survival. And these are limit tendencies, to be sure — actually existing states and companies are run by real humans and have long path-dependent lives that can mitigate or exacerbate these tendencies. The humans cannot, however, change the fundamental nature of the institution. An entity that is defined by centralized power and a claim to a monopoly on violence can't ever be anything else.

The legislature's working on a Flock/ALPR regulation bill (these systems really need to be abolished but there aren't the votes for that) and law enforcement is making the usual outrageous claims of how effective and important these systems are. In a planning meeting somebody suggested that since they're almost certainly lying, we should try to get the actual data and discredit them. It's certainly worth doing, but as somebody on the call pointed out, Stop Surveillance City did that very effectively when Seattle wanted to expand their Axon ALPR usage ... and the city expanded it anyhow.

If it's strong enough, regulation can be harm reduction, at least to some extent. But if it's not strong enough, then it just legitimizes and sanctions the abuses. It's still too early to know how this bill will turn out: the initial version wasn't strong enough, and it got further weakened in the Senate committee, but we'll have chances to improve it in the House. We shall see.

https://pnw.zone/@waprivacy/116002121033299203

@dymaxion @hazelweakly

@dymaxion @jdp23 @hazelweakly
Even in corporations, survival often comes above profit
@sabik
@jdp23 @hazelweakly Often, yes, but private equity says less often, these days.
@dymaxion @jdp23 @hazelweakly
Private equity is more like predation, really; it's survival of the PE firm at that point, not the target

@dymaxion @jdp23 @hazelweakly
I was thinking more in terms of how there are multiple documented ways to get more work from workers, which corporations often forgo in favour of (perceived) control

Things like indoor air quality and private offices and reasonable working hours

@dymaxion
Does this make you a supporter of Georgism, anarchy, communism, or some quasi-techno breakthrough of bottom-up decentralized communities defended through cryptography?

@hazelweakly

@unqualifiedtechbros
I generally like to say I'm an optimistic realist.
@hazelweakly