as I explain in my blog, the real problem is libraries which are large amalgamations of unrelated routines, such as libsystemd in the case of CVE-2024-3094.

a good solution is to split up these giant libraries into smaller ones, thus allowing for the dependency graphs of programs to remain leaner.

there is nothing about sd_notify() which requires LZMA compression. nothing. it is a function which writes a supplied string to a UNIX socket, the path of which is provided in an environment variable.
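(to illustrate how small the actual behavior is: the real sd_notify() lives in C inside libsystemd, but its core logic can be sketched in a few lines. this is an illustrative sketch of the protocol, not libsystemd's implementation — it just sends a state string such as "READY=1" to the datagram socket named by $NOTIFY_SOCKET.)

```python
import os
import socket

def sd_notify(state: str) -> bool:
    """Sketch of sd_notify()'s core behavior: send the supplied state
    string to the UNIX datagram socket named by $NOTIFY_SOCKET."""
    path = os.environ.get("NOTIFY_SOCKET")
    if not path:
        return False  # not running under a notify-aware service manager
    # A leading '@' denotes a Linux abstract-namespace socket,
    # which is addressed with a leading NUL byte.
    if path.startswith("@"):
        path = "\0" + path[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), path)
    return True
```

no compression, no parsing of untrusted input — just getenv(), socket(), and sendto().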

@ariadne You're not that different from anyone else pointing at their pet explanation. You are right, partially. So are most others.

Library bloat is one of the causes of the attack, yes. So are most things people mention. That's what's fascinating about this attack: it worked because of a perfect combination of factors weakening the security of FOSS.

Improving any one of these factors individually would have made this precise attack impossible. But the problem is bigger than any single thing, though library bloat is a big part of it.

Edit: I read your blog entry, and fully agree with it. I still think there are more factors involved, e.g. build system complexity, to name only one.

@ska @ariadne I disagree. This attack was not about bloat. It is about the way we develop open source: components are developed by maintainers, and those maintainers are given broad powers. A malicious maintainer is very hard to detect and prevent.

@bagder @ariadne And you are also right. Trust is built in the FOSS development model. So a part of the puzzle is: how can we keep that development model, which has made FOSS successful, while screening for malicious maintainers more efficiently?

Part of the answer is more peer review, which means more people involved, which means, as always, more funding. But it's not the whole answer either.

@ska @ariadne there is no easy fix for this, but I believe the focus needs to be on detecting anomalies after the fact rather than thinking we can get rid of malicious maintainers

@bagder @ariadne We obviously can never get rid of malicious maintainers, though the ratio is extremely low, and the fact that this doesn't happen more often is to me a measure of *success* of the FOSS model. Even if there are two or three more similar attacks in the wild that have not been detected, that's really small compared to the number of good-faith maintainers.

When I say "screening for malicious maintainers", I really mean ensuring the correctness of contributions that make it in.

Detecting anomalies can be a part of the solution, but there's so much work to be done towards prevention as well. And when I think "anomaly detection", I think "reproducible builds", which are also a prevention tool.

(Edit: added a paragraph to clarify my meaning in the previous post.)

@ska @ariadne yes, it will help as in we need to tighten the screws all over the ship - but the xz attacker was only a tiny commit away from reproducible builds

@bagder @ariadne We're in agreement here. I'm not touting reproducible builds as the solution, just mentioning them since you were talking about anomaly detection.

Tightening the screws all over the ship is a perfect metaphor for what needs to be done - the attack exposed several weaknesses in the FOSS ecosystem, and they all need to be addressed. I don't think it can plausibly be done without funneling more resources into the community, though.

@ska @ariadne "more resources" is most likely one of the screws, yes.