I've been reverse engineering and benchmarking Apple's, ARM's, and ImgTec's hardware image compression formats. ARM's AFRC is the clear winner, but does native hardware compression make real-time texture encoding obsolete?

https://www.ludicon.com/castano/blog/2026/03/hardware-image-compression/

Hardware Image Compression

One of the things I've always lamented about hardware image formats is the slow pace of innovation. Developers were usually unwilling to ship textures in a new format unless that format was widely available. That is, the format had to be supported in the majority of the hardware they were targeting…

Ignacio Castaño
@castano whoa, how do you even begin to reverse engineer these formats?

@aras @castano My understanding is that these formats are fixed rate and you provide the memory for the underlying data. So unlike something like DCC, if you read the underlying bytes via a buffer object, you get to see the compressed representation and can experimentally observe the encoding of arbitrary RGBA data.

(great post, I enjoyed it!)

@zeux @castano ah, right indeed

@aras @zeux It's a little bit more complex than that. At first I thought it could not be done! The APIs abstract access to the texture data and encode and decode it on blits/copies.

The trick is using a buffer that aliases the same memory. In Metal that is now possible by using heaps.

Once you have this machinery, you can generate blocks and look at how they are encoded, or use bit injection to see how each bit in the encoded block changes the output.
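The bit-injection loop described above can be sketched in a few lines. The "codec" below is a made-up stand-in (4 pixels packed into 16 bits, 4 bits per pixel); real AFBC/AFRC blocks are far more involved, and in practice the encode/decode steps would go through the GPU via aliased buffer reads rather than a Python function. The probing idea is the same, though: encode a block, flip one encoded bit at a time, decode, and record how the output changes.

```python
# Toy stand-in for a fixed-rate codec: 4 pixel values -> 2 encoded bytes.
# This is NOT the real AFRC encoding; it only illustrates the methodology.

def encode(pixels):
    """Quantize 4 pixel values (0-255) to 4 bits each, pack into 2 bytes."""
    nibbles = [p >> 4 for p in pixels]
    return bytes([(nibbles[0] << 4) | nibbles[1],
                  (nibbles[2] << 4) | nibbles[3]])

def decode(block):
    """Unpack 2 encoded bytes back into 4 (quantized) pixel values."""
    return [(block[0] >> 4) << 4, (block[0] & 0xF) << 4,
            (block[1] >> 4) << 4, (block[1] & 0xF) << 4]

def bit_inject(pixels):
    """For each bit of the encoded block, report which pixels it affects."""
    base = encode(pixels)
    ref = decode(base)
    effects = {}
    for bit in range(len(base) * 8):
        mutated = bytearray(base)
        mutated[bit // 8] ^= 1 << (bit % 8)   # flip exactly one bit
        out = decode(bytes(mutated))
        effects[bit] = [i for i in range(4) if out[i] != ref[i]]
    return effects

# Each encoded bit maps cleanly to one output pixel in this toy codec;
# on real hardware formats the map is what you are trying to discover.
for bit, changed in bit_inject([16, 32, 64, 128]).items():
    print(f"bit {bit:2} -> pixels {changed}")
```

Running the loop over many input blocks (solid colors, single-pixel deltas, gradients) is how you build up a picture of which bits are endpoints, which are indices, and which are mode selectors.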