
Why did the xz-tools attacker put so much effort into hiding the malware when they could manipulate the tarball?

Çağlar Arlı


With all the discussion about the xz-utils supply chain attack on the Linux distros, one thing confuses me:

As stated here and on the infographic here, the attackers worked their way up to becoming trusted maintainers of the project. They used this position to alter the source code, including some manipulated binary "test files".

However, they also had to tamper with the package's build process. They achieved this by changing the build scripts inside the released tarball. These changes were never committed to the source code hosted on GitHub, which kept them hidden.

I find two things puzzling here:

  • This means the code inside the tarballs differed from the repository code, which is apparently not unusual, as confirmed here:

    The release tarballs upstream publishes don't have the same code that GitHub has. This is common in C projects so that downstream consumers don't need to remember how to run autotools and autoconf. The version of build-to-host.m4 in the release tarballs differs wildly from the upstream on GitHub.

    How can this be "common", especially for potentially system-critical C projects? Doesn't this essentially defeat the whole "security through transparency" benefit that Open Source software is supposed to provide?

  • If all it takes is full access to a "trusted" upstream repository to freely manipulate the release tarballs that find their way into the Linux distros, why go through all the extra effort described in the articles above, such as the multi-stage loaders?
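For what it's worth, the tarball/repo divergence described above is mechanically detectable: anyone can unpack a release tarball and diff it against a checkout of the tagged source. Below is a minimal, self-contained sketch of that check. The file names (`main.c`, `build-to-host.m4`) and directory layout are purely illustrative, not the real xz-utils tree; the point is only that a file present in the tarball but absent from the repository shows up immediately in the diff.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Pretend checkout of the public repository (illustrative contents).
mkdir repo
printf 'real source\n' > repo/main.c

# Simulate a maintainer-built release tarball that sneaks in an
# extra build script never committed to the repository.
cp -r repo release
printf 'generated (or malicious) macro\n' > release/build-to-host.m4
tar -czf release.tar.gz release

# Unpack the tarball and compare it against the repo checkout.
mkdir extracted && tar -xzf release.tar.gz -C extracted
out=$(diff -rq repo extracted/release || true)
echo "$out"   # reports build-to-host.m4 as only present in the tarball
```

Of course, with autotools-based projects the diff is expected to be non-empty (generated `configure` scripts and `.m4` files), which is exactly why a malicious change to `build-to-host.m4` could hide in that noise.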