my only contribution to the xz discourse:

absolutely none of the supply chain stuff we're currently doing, including the things i like, would have stopped this. the only things that can stop this are (1) compulsively treating all code as untrusted, and (2) way, way stronger capability checks and restrictions in running systems. (1) is economically infeasible (the world runs on free labor from OSS), and (2) has had only very limited practical success.

@yossarian

(3) reinstate a web of trust; only trust GPG signatures that are well connected in this web of trust through meaningful keysigning

@nik @yossarian no, that will get no uptake, and even if it did, someone would have signed the key of the guy doing xz maintenance for two years to keep things moving

@nik the idea that a PGP signature would have stopped this is, bluntly, unserious. the person in question *was* the legitimate maintainer; there was no a priori reason to distrust them.

(this is before all of the normal observations about the PGP WoT being defunct anyways)

@yossarian My point is, they should not have become the trusted maintainer without a well-trusted key, and a Debian (or other distribution) maintainer should not have imported the tarball without a trust path to them.

@nik “should” is doing a lot of lifting there, and is hoisted on technologies that empirically have *not* done the job well enough. there’s no reason to believe the previous maintainer wouldn’t have shared their key, that the malicious maintainer wouldn’t have been in the strong set, and so forth.

I’m a strong advocate of code signing, and trust distribution is one of the hardest parts; there is no reason to believe that PGP’s primitives would have sufficed here.

@yossarian @nik Agreed, the web of trust would not have helped here. We don't know yet, but it's entirely possible that this was a fully trusted and well meaning community member who was somehow compromised by a malicious actor.
The alternative is that this was somebody patient enough to build up trust over time, hiding malicious intent all along. That person would still, likely, have been able to bypass web of trust protection.
Arguably we need to go the other way: trust nothing; assume malice.
@noahm @yossarian @nik My immediate thought on the "web of trust" is of things like background checks and security clearance processes; asking every open source developer who is tired of maintaining their codebase to spontaneously run those on people they trust before handing over maintenance rights... probably isn't scalable, even setting aside that OSS is funded largely through donations.
@nik @yossarian what's the scenario where they spend ~2 years maintaining the project without malicious changes and still don't have their keys signed as an xz maintainer?
@yossarian eh I don’t completely agree, stuff like SLSA should make it more obvious when prebuilt packages diverge from the canonical source

@segiddins sure, but this was the canonical source! the decision to distribute the tweaked autoconf as a separate package wasn’t the separating factor here IMO; provenance would have only changed things if the backdoor had been inserted at the index or redistribution layer

(I still think we should do things like provenance, but I think this is a great demonstration of their limitations)

@yossarian sure, but IMO it does place a higher creativity burden on attackers
@yossarian @segiddins There's a subtlety here I think is worth highlighting. AIUI it was the canonical source in the sense that it is what the trusted individual released, but it differed from what it claimed to be in a very detectable way. Specifically, it did not match the tarball generated from the canonical repository. The change hid a small payload needed to activate the "public in git history but obscured" exploit material. We can and should close this part of the path.
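(A minimal sketch of the check described above, assuming a hypothetical distro-side QA step: diff the contents of the signed release tarball against an archive generated from the tagged commit, and flag any file whose content differs. File names and contents here are made up; in the real incident the divergent file was a modified `build-to-host.m4`.)

```python
import hashlib
import io
import tarfile
import tempfile
from pathlib import Path

def tar_digests(path):
    """Map each regular file in a tarball to the sha256 of its content."""
    out = {}
    with tarfile.open(path) as tf:
        for m in tf.getmembers():
            if m.isfile():
                out[m.name] = hashlib.sha256(tf.extractfile(m).read()).hexdigest()
    return out

def build_tarball(path, files):
    """Write a gzipped tarball from a {name: text} mapping."""
    with tarfile.open(path, "w:gz") as tf:
        for name, content in files.items():
            data = content.encode()
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))

with tempfile.TemporaryDirectory() as tmpdir:
    tmp = Path(tmpdir)
    # Stand-in for `git archive` of the tagged commit.
    from_git = {
        "xz/configure.ac": "AC_INIT([xz], ...)",
        "xz/m4/build-to-host.m4": "upstream macro",
    }
    # Stand-in for the signed release tarball, with one file modified.
    release = dict(from_git)
    release["xz/m4/build-to-host.m4"] = "upstream macro + injected shim"

    build_tarball(tmp / "from-git.tar.gz", from_git)
    build_tarball(tmp / "release.tar.gz", release)

    git_side = tar_digests(tmp / "from-git.tar.gz")
    rel_side = tar_digests(tmp / "release.tar.gz")
    # Any file that is new or content-divergent in the release needs review.
    suspicious = {name for name, d in rel_side.items() if git_side.get(name) != d}
    print(sorted(suspicious))  # -> ['xz/m4/build-to-host.m4']
```

This only closes the "tarball differs from the repository" path; it does nothing about malicious content that is committed to the repository itself.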
@iansmcleod @yossarian yes, that’s what I was getting at but didn’t explain as well

@segiddins @yossarian

I'm sceptical that SLSA actually addresses this properly. Reproducible builds were removed from the earliest drafts, and we are dealing with multiple build processes that form the distributed artifacts.

If you are defining that `git archive` should match the distributed+signed tarballs I have bad news regarding NPM/pypi.

@yossarian i think that there is room for more people to be involved in the distribution process for foss in general. imo, if the project had not been allowed to shrink to a single maintainer, or if there were another community capable of watching it, that could have done a lot to help prevent this type of supply chain attack.

but, that does rely on "free labor from OSS", which we still need to address in a more meaningful way. i'm all for getting more people involved and paying them.

@elmiko @yossarian yeah, there's plenty of "room", just very few people willing to occupy it.
@womble @yossarian too true, i'm curious how we change that. is it just more money or is there something else as well?

@yossarian

What about using sources from version control instead of from released tarballs?

@robryk single source of releases is a good practice IMO, but isn’t a generalized solution here: the maintainer could just as well have pushed the autoconf changes to a tag on version control.
@yossarian @robryk Agree; if the attacker had been forced to commit the changes, there would have been incrementally higher risk of discovery for them and better traceability for us after the fact, but definitely not a "solution" in any sense. They committed parts that went unnoticed, like https://mastodon.social/@WPalant@infosec.exchange/112184986611495654

@yossarian

Sure, it's not a general solution to the "malicious committer" problem, but it _is_ a solution to _this_ attack. (Obviously, if we were doing that, the attacker would choose a different attack, though potentially risking a larger chance of discovery.)

@yossarian Agreed, being able to trust people is critical for OSS. The fact that this individual was contributing for multiple years is very concerning. I'm worried that this incident will hurt the trust we need to have in folks who are contributing in good-faith.
@yossarian at least sbom/better inventories make reactions a bit faster
@yossarian From my admittedly limited understanding of how the backdoor is implemented, I think the real problem is that a malicious library can mess with the execution of functions defined and implemented outside it. I think we need a new standard for how symbol resolution/linking/loading works to have a defense better than "recursively check your upstream for backdoors." #xz
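(A loose Python analogy of the linking/loading concern above, not a model of the actual ELF/ifunc mechanics: merely loading a dependency runs its initialization code, which can silently rebind a function that some *other* module defined. All module and function names here are invented.)

```python
import sys
import types

# A hypothetical "application" module with a security-relevant function.
app = types.ModuleType("app")
exec("def verify(sig): return sig == 'valid'", app.__dict__)
sys.modules["app"] = app

# A hypothetical "dependency" whose import-time side effect is to hook
# app.verify -- analogous to a shared library interposing on a symbol
# resolved at load time.
dep_src = """
import app
app.verify = lambda sig: True  # silently weaken the check
"""
dep = types.ModuleType("dep")
exec(dep_src, dep.__dict__)
sys.modules["dep"] = dep

print(app.verify("forged"))  # -> True: the dependency rebound the check
```

The poster's point generalizes: as long as loading a library lets it rewrite the behavior of symbols it does not own, "check your upstream" is the only defense.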

@yossarian the only supply chain security thing that I've seen that looks like it might provide any security value is SBOM, to at least attempt to document components so that if one of them gets hacked like this, you know which parts of your software are actually using it. The problem is that there's no real way to enforce the correctness of the SBOM, so it only helps if a small inner component is broken like this (rather than the overall combination), and if the combiner makes a genuine attempt to make it complete.

Too many of these efforts still end up running the attacker's code and thus don't actually do anything. Wishful thinking, fumbling around in the dark.

I'm still waiting for somebody to show a threat model for any of these efforts and how they mitigate any of the threats therein.

@yossarian And don't forget to treat the compiler etc. as part of the supply chain: https://blog.acolyer.org/2016/09/09/reflections-on-trusting-trust/

@yossarian haven't they changed the list of exported symbols? Stable distros could've caught that.
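(A sketch of that idea, under the assumption that a distro QA job diffs a library's dynamic symbol table across releases, e.g. from parsed `nm -D --defined-only` output, and flags additions or removals for human review. The symbol names below are illustrative, not the real xz export list.)

```python
# Exported-symbol sets for two hypothetical releases of a shared library.
old = {"lzma_code", "lzma_end", "lzma_easy_encoder"}
new = {"lzma_code", "lzma_end", "lzma_easy_encoder", "_get_cpuid"}

added = sorted(new - old)
removed = sorted(old - new)
if added or removed:
    # An unexplained ABI-surface change is a review trigger, not proof of malice.
    print("ABI surface changed:", {"added": added, "removed": removed})
```

This is cheap to run per upload, though like the other checks in this thread it only raises the attacker's creativity burden rather than blocking the attack outright.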
@yossarian Yeah I'm thinking about this too: how pretty much none of the supply chain security work I've worked on or seen getting a lot of traction would have mitigated this threat. And how https://www.harihareswara.net/posts/2024/trust-new-maintainer/ has a "bare minimum" suggested list of checks that probably would not have caught the bad actor.

@yossarian how about: for each piece of software, have multiple independent (as in, unlikely to collude) reviewers reading every diff, and flagging anything they don't understand as a potential backdoor?

It's a lite version of (1), since instead of everyone reviewing everything (N*M) we only need a few reviewers for each project (k*M, k < 10).

It'd make every update require a lot more work, but not as much as your (1).

@wolf480pl show me a protocol that establishes that many independent trusted identities, and i'll show you a protocol that's ~impossible to deploy for nontrivial numbers of people in the real world :-)
@yossarian That's not entirely true. The anti-bullying efforts that many (although not yet enough) free software projects have been deploying would have had a chance of mitigating the part of the attack that used fake social pressure to get the malevolent code admitted.
@yossarian @encthenet
How about (3) create social systems surrounding OSS that don’t put a single person in a position of absolute and unsupervised trust like this. To me this is a “single point of failure” story, but about •human• dependencies, not code dependencies.