The supply chain attack on XZ Utils is fascinating. It does not appear to be a hack but rather an inside job: the malicious code was added by someone who had been co-maintaining the project for the past two years. There is a considerable amount of (presumably) legitimate and non-trivial work associated with that person. From what I can tell at a quick glance, however, they made no public contributions unrelated to xz.

Given the effort that went into hiding the backdoor, I’m fairly certain that it was supposed to operate undetected for a long time. It’s probably sheer luck that someone noticed the side effects it caused and discovered it merely a month after it was planted.

I’m looking forward to a thorough analysis of the implant; hopefully it will allow conclusions about the intentions behind it. As things stand now, this could be a long-term operation by an APT, pushing their maintainer into a popular project which (like way too many open source projects) was constantly short on contributors. Obviously, monetary interests are also a possible explanation.

I see people arguing about this who clearly have no idea about the reality of open source projects. Enforcing code reviews, really? Most open source projects can consider themselves lucky if they have a single reliable contributor. Who is supposed to do these code reviews and where will they get the time?

With most open source projects, a single burst of useful contributions is all it takes to be made a co-maintainer (I’m talking from experience). Often enough you will even be offered sole maintainership. The person behind the repository has no time, and they will happily delegate to whoever does.

I see someone suggesting that this backdoor was built up piecewise over the course of a year. I haven’t verified this, but it would make for a highly sophisticated and stealthy attack. Even with reviews, most open source projects would be unprepared to detect it. That one odd line in the build script standing out? It works, so nobody would bother to dig further.

The more important concern right now is: the same person has been driving xz releases since at least December 2022. It has to be verified that everything before xz 5.6.0 is really clean, otherwise this is very bad.

From the look of it, verifying that xz 5.4.6, for example, can be trusted is going to be really tough. With versions 5.6.0 and 5.6.1 we already know that the code in the repository and the code in the tarball aren’t identical. So why don’t we download the tarballs for the previous versions and compare them to the repository?

Well, because tarballs and repository generally aren’t identical, and never have been from what I can tell. The tarballs contain a bunch of files generated with autoconf and aclocal. So there is a whole lot of autogenerated code, some of which has been messed with.
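Mechanically, such a comparison is straightforward; the hard part is interpreting the inevitable differences. A minimal sketch (Python for brevity, with hypothetical directory paths) that hashes both trees and reports what exists only in the tarball versus what differs:

```python
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file() and ".git" not in p.parts
    }

def compare_trees(tarball_dir: Path, repo_dir: Path):
    """Return (files only in the tarball, files present in both but differing)."""
    tar, repo = tree_digest(tarball_dir), tree_digest(repo_dir)
    extra = sorted(set(tar) - set(repo))  # autogenerated files land here
    changed = sorted(f for f in tar.keys() & repo.keys() if tar[f] != repo[f])
    return extra, changed
```

For xz, the “extra” list would be dominated by autoconf/aclocal output such as configure itself, which is exactly why a naive comparison doesn’t settle the trust question: those extras need to be reproduced independently, not just listed.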

As I see it, some code has been added to the configure file after the legitimate code for AM_GNU_GETTEXT. This code invokes build-to-host.m4, a trojanized version of a legitimate script.

There is no such modification in the files for version 5.4.6 for example, but there are still lots of autogenerated files – way more code than can be realistically reviewed manually (and trust me: you don’t want to review this code). So in order to exclude the possibility of other manipulations, someone will need to attempt to reproduce these files with all the right versions of the build tools. And I’m just happy that this someone isn’t going to be me.

I don’t know whether pre-generating these files when building release tarballs is common practice. I suspect that it is, given how messy it is to get all the necessary autotools dependencies in place yourself. I would suggest using a reasonable build system but… what can possibly be reasonable about a C codebase in the year 2024?

Github took out the big hammer and disabled the entire xz repository along with a bunch of others belonging to the project. I fail to see how this is going to help. People have been studying these repositories, looking for clues about what happened and whether we can still trust older versions. Now almost the entire history has become inaccessible.

Also, I realized that my statement above about the malicious contributor driving releases since December 2022 is likely incorrect. The date displayed by Github isn’t when the release artifacts were uploaded, it’s rather the date of the release tag. According to Web Archive, xz releases have been moved from Sourceforge to Github somewhere between April 24 and May 6, 2023. This included some of the older releases as well.

The original xz maintainer has started fixing the issues. And they fixed this gem which apparently nobody had noticed so far: https://git.tukaani.org/?p=xz.git;a=commitdiff;h=328c52da8a2bbb81307644efdb58db2c422d9ba7

Under the pretext of a bugfix, this commit introduced a subtle syntax error (the dot before the my_sandbox function). As a result, the check for Landlock sandbox functionality always fails and this feature is consistently disabled. Quite ingenious, and one could always claim it was an overlooked typo – if it weren’t for the other changes.
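The trick works because configure-style feature checks treat any probe failure as “feature not available” and carry on silently. A minimal Python analogy of that pattern (compile() standing in for the C compiler, the my_sandbox name borrowed from the commit):

```python
def feature_available(probe_source: str) -> bool:
    """Configure-style check: enable the feature only if the probe compiles.
    Crucially, any failure is silently treated as 'feature absent' -
    no error is ever surfaced to the person running the build."""
    try:
        compile(probe_source, "<probe>", "exec")
        return True
    except SyntaxError:
        return False

# A working probe enables the feature...
assert feature_available("def my_sandbox(): return 0")
# ...while one stray dot (as in the xz commit) disables it everywhere, invisibly.
assert not feature_available(".def my_sandbox(): return 0")
```

This is why such sabotage is so hard to catch in review: the build still succeeds, all tests still pass, and the only observable effect is a hardening feature quietly reporting itself as unsupported.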

One thing I’ve learned from the xz backdooring so far: the attacker clearly focused on the build system and unit tests, and most of their legitimate commits go in this direction as well. This is unlikely to be a coincidence: most developers treat these parts as less relevant than the “real” code. Changes there happily get delegated and receive less scrutiny. For complex build systems like autoconf this can be fatal.

First more detailed analysis of the backdoor AFAIK, in this Bluesky thread: https://bsky.app/profile/did:plc:x2nsupeeo52oznrmplwapppl/post/3kowjkx2njy2b

So the backdoor’s intention isn’t compromising SSH sessions but rather executing arbitrary code on vulnerable Linux servers. The payload is hidden within the RSA key sent to the SSH server during authentication. This payload has to be signed with an unknown Ed448 key which only the attackers possess. If the signature checks out, the payload is passed to system(), which executes it as a shell command. Otherwise the code falls back to the default SSH behavior.
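The gating logic itself is simple to express. Here is a sketch of the control flow in Python, with HMAC standing in for Ed448 signature verification (the standard library has no Ed448; the structure, not the primitive, is the point, and all names here are illustrative):

```python
import hashlib
import hmac

# Stands in for the attackers' hardcoded Ed448 public key material.
ATTACKER_KEY = b"only-the-attackers-know-this"

def handle_auth_blob(blob: bytes, tag: bytes) -> str:
    """Mimics the hooked code path: execute the payload only if the tag
    verifies, otherwise fall through to normal authentication as if
    nothing happened."""
    expected = hmac.new(ATTACKER_KEY, blob, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        # The real backdoor hands the payload to system() at this point.
        return f"system({blob.decode()!r})"
    return "fall back to regular SSH authentication"
```

Because verification fails for everyone who lacks the signing key, defenders cannot trigger the backdoor to scan for it, and captured traffic cannot be replayed – the same properties noted in the analysis below.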

Had this backdoor been discovered a few months later, the result would be lots of vulnerable servers all over the world. And only the attackers would be able to detect from outside which ones are vulnerable, because only they can send a correctly signed payload that would trigger command execution.

Planting a command execution backdoor into most Linux servers out there sounds too ambitious for someone driven by monetary interests, there are simpler ways to build a botnet. The level of sophistication and long-term planning indicates a state-level actor. Unfortunately, there isn’t a shortage of candidates. With quite a few Western governments pushing for lawful interception lately, I wouldn’t rule out any country at this point.

Filippo Valsorda (@filippo.abyssdomain.expert) writes:

“I'm watching some folks reverse engineer the xz backdoor, sharing some *preliminary* analysis with permission. The hooked RSA_public_decrypt verifies a signature on the server's host key by a fixed Ed448 key, and then passes a payload to system(). It's RCE, not auth bypass, and gated/unreplayable.”

Also worth noting: why was OpenSSH chosen as the target here? Some people blame systemd support – distributions patching OpenSSH to link against libsystemd, which in turn pulls in liblzma. While this probably made the attackers’ job somewhat easier, I doubt that they would have given up without this dependency.

They also didn’t actually care that it was OpenSSH. They merely needed a network-connected vehicle for code execution. It certainly came in handy that OpenSSH runs as root and is installed on pretty much any Linux server.

The other obvious vehicle for this kind of attack would have been web server software. The only difference: with nginx and Apache there are two big players in this field, and the attackers would have had to cover both. But there are plenty of dependencies here that could be abused.

Which means: nginx and Apache dependencies (especially the transitive ones and especially those used by both) should probably be checked for signs of suspicious activities. OpenSSL is the obvious target and has received significant scrutiny in the years since Heartbleed. But I wonder what else is there that nobody notices.

#xz #xzbackdoor #xzutils #openssh #nginx #apache

I originally dismissed timezone analysis (see https://rheaeve.substack.com/p/xz-backdoor-times-damned-times-and) as too easy to fake. It looks like I might have been wrong about that.

It isn’t just that the commit times match core working hours of 9 am to 6 pm in EET/EEST. There are a number of commits actually showing that time zone instead of the usual fake UTC+08, where the threat actor presumably forgot to change the timezone. There is also a mailing list reply where they quote the time of the original mail, again implying EEST as their local time zone.

What’s more, they consistently don’t work on or around Catholic/Protestant Christmas (December 25th). There are, on the other hand, commits during the Chinese New Year and on Orthodox Christmas.
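This kind of analysis ultimately boils down to tallying the UTC offsets recorded in git author timestamps and looking for the lone outlier that betrays a forgotten timezone switch. A sketch, using made-up timestamps rather than the actual xz history:

```python
from collections import Counter
from datetime import datetime

def offset_histogram(author_dates: list[str]) -> Counter:
    """Count commits per UTC offset as recorded in git author timestamps
    (format: 'YYYY-MM-DD HH:MM:SS +ZZZZ')."""
    hist = Counter()
    for stamp in author_dates:
        dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S %z")
        hist[f"UTC{dt.strftime('%z')}"] += 1
    return hist

# Hypothetical sample, not real xz commit data:
sample = [
    "2023-06-27 17:27:09 +0800",
    "2023-06-28 10:02:11 +0800",
    "2023-07-01 15:45:00 +0300",  # the slip: EEST instead of the usual UTC+08
]
```

Note that the author timestamp is entirely self-reported by whoever makes the commit, which is why none of this rises above circumstantial evidence.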

None of this is hard evidence, it could all be an elaborate decoy. But there are several independent sources corroborating the hypothesis.

#xz #xzbackdoor #xzutils

@WPalant Another way to approach this is by asking a different question: does the RSA_public_decrypt function get invoked only when RSA keys are used? If that’s the case, sensitive target systems running in .ru and .cn are automatically excluded, because one uses GOST and the other SM cipher suites. As I understand it, they don’t compile any RSA code in.

@cek That would be big if true. I’ve looked into it for Russia but could not confirm.

Judging from the Astra Linux documentation, the OpenSSH variant there definitely supports GOST R 34.13-2015 and GOST R 34.11-2012. These however cover block cipher operation and hashing respectively, replacing the likes of AES and HMAC.

The authentication phase still appears to support RSA at the very least. libgost-astra does contain support for GOST R 34.10-2012 which would allow session key negotiation. However, my impression so far is that it is only being used for SSL connections (tunnels in particular).

Do you have any pointers?

@WPalant I have none as looking for answers brings up more questions.