Everything in modern computing seems driven by performance graphs, for software (and firmware) that is full of security vulns; the theory being that this is okay because mitigations can be applied later, before (too many) users are permanently harmed. Ideally with minimal fixes that patch each individual bug as it is found, as narrowly as possible, so the benchmarks don't move: maintaining maximum performance, and maximum vulnerability.

Your computer is designed for harm and performance, not your safety, at this time.

You may think: hey, that's unfair, it is not designed for harm. But choices are made every day not to make the system safer, so it is a design choice.

Would you be okay with a bridge that was designed to fall apart slowly, with a plan to continually patch it after anything broke, because that cost your government less money? Parts of the roadway would fall off at times, maybe while people were driving on it. Crews would repair the holes within a few days with a “fast patch,” and there would be articles praising them for acting quickly to protect drivers. But wouldn't that be designing for harm? There are other choices that avoid it, as we see in actual bridge building. Yet that is how computers currently look from inside security teams.

https://www.cisa.gov/cisa-director-easterly-remarks-carnegie-mellon-university

@blinkygal

There's a 1992 paper/keynote by Nancy Leveson [1] that I think about sometimes, in which she compares software development to steam engines. In the 19th century, people built high-pressure steam engines despite lacking the metallurgical, engineering, and manufacturing process knowledge needed to make boilers that don't occasionally explode, killing people. It took almost a century to really solve the problem.

She was arguing that software and steam engines share similar relationships between economic usefulness, technological limitations, and safety, and she supposed that software might follow a similar trajectory as we improve engineering practices. For boilers, regulation was a part of it.

I think the comparison still holds up, over 30 years later.
[1] http://sunnyday.mit.edu/steam.pdf

@kenrb @blinkygal the thing is, unlike those boiler folks, we know quite well how to do this. However, it takes time and effort (and thus, money), so it's just not happening (enough) because incentives are elsewhere.

@vriesk @kenrb Yes! So it is a design choice being made, now, rather than unknown unknowns.

We have learnt a lot in the 30-plus years since the comparative paper on steam engines was published, and attackers make use of new knowledge. Software vendors keep trying to find silver bullets to avoid changing their designs.

I think Jen Easterly has this right in her followup two decades later. Incentives have to change.

@blinkygal @kenrb design choice, yes, but as in "process design choice", as there are many aspects involved: software design is one of them, but there's also a whole richness of technology choices, and then the choice of due processes as well.

And then there's the problem that the whole software engineering world depends on a huge amount of legacy software that really deserves to be rewritten/hardened. And who's gonna do that?

@vriesk @kenrb Thanks for the thoughtful comment, I've been mulling over what I think here.

Yeah, it is a process design choice, with a ton of inertia toward using the tools that existing software was built with, which then propagates the same security properties into new code as well.

And you're absolutely right, there's a whole world of legacy software, most of the large software we use day-to-day falls into that category. Some small things are being rewritten (https://www.memorysafety.org/). But there's vast amounts of code that is not.

Part of me says it doesn't matter for the purpose of the above; users are harmed, thousands of people lose control of their digital lives every year, some end up in jail or at risk in other ways for it, millions or billions are lost in ransoms. The point in some sense is just that this is happening, that vendors are aware of it, and users don't really know what's being given to them and that it was a choice.

Right now there's not the right level of acknowledgement of the problem, in my opinion, outside of CISA and the intelligence agencies. So of course nothing is moving to fix it in a hurry.

If the vendors who are writing on top of legacy software started investing into rewriting it into memory safe languages, or hardening it with tools within the same language, things would start to get better, maybe fast enough that regulation and liability laws wouldn't be needed to protect people.


@blinkygal @kenrb I agree, but I would like to stress that memory safety is just one aspect of general software safety/security. On the strictly technical level, there's also concurrency safety (both at the scale of threads and of distributed systems), the whole aspect of proper authentication and authorization management, and then the more general aspect of logic safety: making sure the tools used are used properly.

1/2

@blinkygal @kenrb As in, no memory and thread safe software stack will protect you from a logic flaw that contains something like "if a person wearing a pink hat kindly asks for money, give them all we have".

Those types of issues can't be ruled out, but proper development process designs can make them less likely. This also costs time and effort, naturally.
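The "pink hat" flaw above can be sketched in C (function and parameter names are hypothetical, purely for illustration): every memory access is in bounds, no sanitizer or memory safe language would flag anything, and yet the business rule is broken.

```c
#include <string.h>

/* Hypothetical payout logic: perfectly memory safe, yet wrong.
 * No overruns, no leaks -- the bug lives entirely in the logic. */
long authorize_payment(const char *hat_color, int asked_kindly,
                       long requested, long balance) {
    /* Logic flaw: a polite request plus a pink hat drains the account. */
    if (asked_kindly && strcmp(hat_color, "pink") == 0) {
        return balance;                      /* give them all we have */
    }
    /* Otherwise pay only what was asked, and only if covered. */
    return requested <= balance ? requested : 0;
}
```

Only review of the business rules catches this class of bug, which is why process design matters alongside language choice.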

2/2

@blinkygal @kenrb adding to that, there are technical choices to be made that either help or hamper writing correct (business logic-wise) software, which is one of the reasons I consider type-unsafe languages like Python or JavaScript, and languages with poor expressivity like Go, to be poor choices for general purpose programming.
@vriesk @kenrb While that is true, the vast majority of the security flaws I see in my work, and the most useful ones to attackers, are memory safety bugs. Without tackling those there's little hope for system integrity.
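For illustration (function names hypothetical), the classic shape of such a memory safety bug in C, next to a bounded fix: the caller controls the input length, the stack buffer does not, and the unchecked copy overruns adjacent memory, which is the raw material of exploits.

```c
#include <stdio.h>
#include <string.h>

/* Memory safety bug: a `name` longer than 15 bytes overruns `buf`
 * on the stack, corrupting adjacent memory (undefined behavior). */
void greet_unsafe(const char *name) {
    char buf[16];
    strcpy(buf, name);             /* no bounds check */
    printf("hello %s\n", buf);
}

/* Bounded fix: snprintf never writes past `outsz` bytes, and the
 * `%.8s` precision caps the copied name at 8 characters. */
void greet_safe(char *out, size_t outsz, const char *name) {
    snprintf(out, outsz, "hello %.8s", name);
}
```

Memory safe languages make the first function unrepresentable rather than merely inadvisable, which is the argument for rewriting or hardening the legacy stack.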