I’m super excited about this blogpost. The approach is so counterintuitive, and yet the results are so much better than anything else that we’ve tried for memory safety. We finally understand why.

https://security.googleblog.com/2024/09/eliminating-memory-safety-vulnerabilities-Android.html

Eliminating Memory Safety Vulnerabilities at the Source

Posted by Jeff Vander Stoep - Android team, and Alex Rebert - Security Foundations Memory safety vulnerabilities remain a pervasive threa...

Google Online Security Blog
@jeffvanderstoep Yes, concentrating on making the new code safer makes a ton of sense. The only part that I find dubious is that bugs in old code decay exponentially over time. I remember when new issue classes like integer overflow became mainstream, leading to tons of new bugs discovered in very old code. We are always at risk of the next attack method preying on unaware old code.

@jeffvanderstoep The part I think is most interesting, but also rarely considered:

"The Android team has observed that the rollback rate of Rust changes is less than half that of C++."

Those aren't observations people will normally admit to about their development processes.

@jeffvanderstoep Just realized I can write directly to an author :)

I have some doubts about this "half-life" metric, but maybe I don't get the full picture:

https://infosec.place/notice/AmO2BAowHxngnThcjg
buherator (@[email protected])

About vulnerability "half-life": I still have to dig into the works referenced by the recent Google post, but the data is obviously based on known vulns. https://security.googleblog.com/2024/09/...

@buherator

Why? It's consistent across all projects that the cited "large scale" study analyzed. It's also consistent with what we saw when we looked at Android, which was not part of the study. When we changed development practices within Android, the results matched what we would expect based on the half-life metric.

When you look at studies that analyze this from the opposite angle ("how much does it cost to find the next vulnerability in the same codebase?"), you'll see a similar result. E.g. "On the Exponential Cost of Vulnerability Discovery": https://mboehme.github.io/paper/FSE20.EmpiricalLaw.pdf

There's a finite number of vulnerabilities within a codebase. As the density drops, the cost of finding the next vulnerability rises.
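To make the "half-life" framing above concrete, here is a minimal sketch of the exponential-decay model being discussed. The 5-year half-life and the initial count of 100 are illustrative assumptions for the example, not figures taken from the post or the cited study.

```python
# Sketch of the exponential "half-life" model for latent vulnerabilities.
# half_life_years is an assumed parameter, not a figure from the Google post.

def remaining_vulns(initial: float, age_years: float, half_life_years: float) -> float:
    """Expected vulnerabilities still latent in code of a given age,
    assuming the count decays exponentially with the given half-life."""
    return initial * 0.5 ** (age_years / half_life_years)

# Under an assumed 5-year half-life, code that starts with 100 latent
# vulnerabilities is expected to retain 25 of them after 10 years.
print(remaining_vulns(100, 10, 5))  # -> 25.0
```

The same curve is what makes new code dominate vulnerability counts: most of the remaining density sits in the youngest code, so making only new code memory-safe still removes most of the expected future vulnerabilities.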

@jeffvanderstoep Thanks for your reply! I don’t doubt the validity of your measurement. I’d argue about two things:

  • The simpler thing is communication: the phrase “half-life” or “decay” implies that vulns disappear without explicit dev intervention, e.g. as a side-effect of unrelated code changes (or even the passage of time!). While this may be true in some cases, I don’t see how the data would (or could) support such an observation.
  • My understanding is that when we look at the overall results of different vuln discovery strategies (your study) or of applying the same strategy with “more force” (Böhme-Falk), we basically see the effects of testing coverage, and it’s no surprise we can grow coverage faster in new code. What I think would be more revealing is looking at new vulns (per LoC?) vs. code age when a new discovery method (e.g. a new sanitizer or more intelligent test-case generation) is introduced. FTR: I bet such data would actually confirm your results, but without data about the effect of new discovery methods I think drawing conclusions about code “maturity” is much harder.

@jeffvanderstoep That’s a very good result. Good points about the half-life of vulnerabilities and about new vulns being more likely to be found in new code.

@jeffvanderstoep It is only counterintuitive if we follow the usual SDLC model (which is wrong).

If we follow a model where code is never done and needs constant rework and tweaks, like a dynamic system, then it is totally intuitive.

This is one more argument for ditching the SDLC and everything we built around it.