HEIC & its consequences have been a disaster for the human race

@gianni

Because it's patent-encumbered?

@RL_Dane
Among other things, yes.

@gianni

I'd be curious to know the other things 😁

I've started using WebP where possible.

I wish my phone would snap pictures in that format directly.

JPEG is starting to get AARP flyers in the mail. 😂

@RL_Dane
WebP is often worse than JPEG, especially for photographic images. It is better at low fidelity, about even at medium, & worse at high fidelity. It is decent with non-photographic images, though. This surprised me initially, as it goes against the whole purpose of WebP.

Lossless WebP, on the other hand, is fantastic. There hasn't been a better lossless format until lossless JXL.

@gianni

That's really fascinating, thanks for sharing. I've been interested in codecs for many, many years (though I'm not terribly well versed in the subject), initially general-purpose ones like stuffit/arc/arj/zip/compactor pro/compress/gzip/bzip2, etc., but later JPEG fascinated me with its magical (for the time) powers.

I'm surprised that JPEG XL is still using the DCT. I thought all the cool (video codec) kids had moved on to wavelets.

@RL_Dane You'd have to ask Jon about that. I know JPEG2000 used the DWT, but VarDCT is unique to JXL. I'll ping him here in case he's around
@wb
@gianni @RL_Dane Basically all modern codecs (h264, h265, av1, jxl,...) still use the DCT in one way or another. JPEG 2000 is an outlier. The DCT is still the best idea ever for lossy image compression. JXL extends JPEG by having multiple block sizes (not just 8x8 but also e.g. 16x32 and 8x4) and has many other improvements, generalizations and extensions, but the DCT is still one of the most important core coding tools.
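The DCT-and-quantize idea wb describes can be sketched in a few lines. This is a toy illustration of the core trick (not any real codec's pipeline): transform a block, throw away precision in the frequency domain, transform back. For a smooth block, almost all the energy lands in a few low-frequency coefficients.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: row k sampled at pixel positions x.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

D = dct_matrix(8)
# A smooth 8x8 block (vertical gradient), typical of photographic content.
block = np.outer(np.linspace(0, 255, 8), np.ones(8))
coeffs = D @ block @ D.T            # forward 2-D DCT
quant = np.round(coeffs / 16) * 16  # coarse uniform quantization
recon = D.T @ quant @ D             # inverse 2-D DCT
# Energy compaction: only a handful of coefficients survive quantization,
# yet the reconstruction stays close to the original block.
```

Without the quantization step the transform is perfectly invertible; all the loss comes from rounding the coefficients.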
@gianni @RL_Dane
VarDCT, or rather the general idea of variable block sizes, is not unique to JXL: most modern video codecs also have something like that. The way JXL does it is more flexible than in video codecs though (more block types and positioning options, fancier entropy coding of the block selection itself), and it is also the first codec to combine variable blocks with progressive decoding (which is not trivial).
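A minimal sketch of the variable-block-size idea, using my own made-up heuristic (real encoders like JXL's use much fancier rate-distortion decisions): smooth regions get one large transform, busy regions are split into smaller ones so artifacts stay localized.

```python
import numpy as np

def choose_transform_blocks(tile, var_threshold=500.0):
    # Toy mode decision (an illustrative heuristic, not JXL's actual one).
    # Returns a list of (row, col, size) transform blocks covering a
    # 16x16 tile.
    if np.var(tile) < var_threshold:
        return [(0, 0, 16)]                                # one 16x16 block
    return [(i, j, 8) for i in (0, 8) for j in (0, 8)]     # four 8x8 blocks

flat = np.full((16, 16), 128.0)                  # smooth tile
noisy = np.random.default_rng(1).uniform(0, 255, (16, 16))  # busy tile
```

The encoder then signals the chosen partition to the decoder, which is where the "fancier entropy coding of the block selection itself" comes in.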

@wb @gianni

Fascinating, thank you for taking the time to answer.

So I'm guessing the codec analyses the image and decides on the unique mosaic of block sizes, or is it just one block size per image?

If almost everything is still DCT-based, why don't newer algorithms have the kind of chunky artifacts that old JPEG does? They just seem to get blurrier, not as artifact-y.

@RL_Dane @gianni
JPEG treats 8x8 blocks independently, causing blockiness. More modern codecs apply deblocking filters after inverse DCT. Essentially these are doing some kind of (selective) blur that gets rid of the block edges. Video codecs tend to do it quite aggressively to keep things smooth even at very low bitrates, which can lead to loss of detail and texture.
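The "selective blur" can be sketched in one dimension. This is a toy filter, far simpler than the real deblocking filters in H.264 or AV1: at each block boundary, a small step is assumed to be a quantization artifact and gets smoothed, while a large step is assumed to be a real image edge and is left alone.

```python
import numpy as np

def deblock_1d(row, block=8, strength=25.0):
    # Toy deblocking filter: smooth small steps at block boundaries,
    # preserve large ones (likely real edges).
    out = np.asarray(row, dtype=float).copy()
    for b in range(block, len(out), block):
        step = out[b] - out[b - 1]
        if abs(step) < strength:
            out[b - 1] += step / 4  # pull the boundary pixels
            out[b] -= step / 4      # toward each other
    return out

# A mild block-boundary step (artifact) gets halved...
artifact = deblock_1d([100.0] * 8 + [110.0] * 8)
# ...while a strong edge passes through untouched.
edge = deblock_1d([0.0] * 8 + [200.0] * 8)
```

The `strength` threshold is the knob wb alludes to: crank it up (as low-bitrate video codecs effectively do) and real texture starts getting smoothed away too.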

@wb @gianni

Interesting! I didn't realize the modern codecs were just "covering up" the "edges," so to speak.

Can you elaborate a little bit about what you mean by JPEG treating the blocks "independently?"

Are the newer codecs applying some kind of averaging across blocks, or making the blocks overlap or something?

@RL_Dane @gianni
In modern codecs, you have deblocking filters, sometimes overlapping transforms, and directional prediction which cause dependencies between blocks. One issue with that is that it causes generation loss (accumulated artifacts after repeated lossy recompression) to spread further. In JPEG, "what happens inside the 8x8 block stays within the 8x8 block" (only exception: chroma subsampling/upsampling).
See also: https://youtu.be/FtSWpw7zNkI
Generation Loss: JPEG, WebP, JPEG XL, AVIF
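The "stays within the 8x8 block" property is also why aligned JPEG recompression is so stable: once the coefficients sit on the quantization grid, re-quantizing them changes nothing. A toy sketch (ignoring the integer pixel rounding and color conversion a real codec would add, which is where real-world generation loss creeps in):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

def generation(img, q=16):
    # One JPEG-style lossy pass: independent per-8x8-block DCT,
    # uniform quantization, inverse DCT. No deblocking, no prediction.
    D = dct_matrix(8)
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            c = D @ img[i:i+8, j:j+8] @ D.T
            out[i:i+8, j:j+8] = D.T @ (np.round(c / q) * q) @ D
    return out

img = np.random.default_rng(0).uniform(0, 255, (16, 16))
g1 = generation(img)   # first generation: lossy
g2 = generation(g1)    # second generation: numerically identical to g1
```

Once deblocking or cross-block prediction enters the picture, each pass perturbs neighboring blocks too, so errors keep accumulating instead of settling.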

@wb

It's fascinating that good old JPEG had the least fading after repeated codec cycles.

I would not have predicted that. 😅

@RL_Dane @gianni
Good ol' JPEG is pretty ol' but definitely still also pretty good :)
@wb