Folks who want to see JPEG-XL supported in more browsers, what is it about the format that attracts you to its use on the web compared to currently supported formats?

@jaffathecake I think for the web, the two key features of jxl compared to other formats are:

1. Reliable and very effective high-fidelity lossy compression, in particular for HDR images which I hope will get traction on the web in the near future;

2. Lossless JPEG recompression: a no-brainer to improve delivery of existing legacy images for which only already-compressed versions exist.

There are other nice things too, like progressive decoding, a future-proof format design, suitability across the whole workflow, etc.

@jonsneyers the replies from web developers seem to put progressive decoding as a much higher priority. Are they wrong?

@jaffathecake I love progressive rendering, but the reality is that the situations in which it makes a user-visible difference are getting increasingly rare.

Whenever a 200KB image takes seconds to load, you're probably already mad at the 5MB JS bundle that blocks it.

@kornel I agree for progression throughout the file, but I feel there's still benefit in getting a representative render from the first couple of kilobytes of a file

@jaffathecake it's harder than it seems.

Browsers throttle re-rendering of pages during loading. Lots of things can block and delay render.

TLS makes data arrive in records, often 16KB each (the record size is configurable for those who know how, but smaller records add overhead).

Congestion and bufferbloat make data arrive in laggy bursts rather than a steady trickle. Very bad signal strength also tends to be all-or-nothing. You may need HTTP/2 prioritization tricks and large images to even have partial data to render.

@jaffathecake Progressive is useful server-side when making thumbnails, but also tricky.

Downloading less data is hard because of the latency of cancelling a request. Server-side connections are way faster, and all images seem insignificant compared to Docker images.

A partial progressive render doesn't get the same sharpness and gamma handling as proper resizing from the full-resolution image, so you need to overshoot the progressive resolution to be safe, which lessens the savings.
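The gamma half of that point can be shown with a toy calculation (a minimal sketch: the sRGB transfer-function formulas are the standard ones, the two-pixel "image" is purely illustrative). Averaging coded sRGB values, which is roughly what a low-resolution progressive pass gives you, comes out darker than resizing in linear light, which is what a proper resizer does:

```python
# Sketch: averaging gamma-encoded sRGB values vs. averaging in linear light.
# A partial progressive render effectively downscales in the coded (gamma)
# domain, so its result won't match a correct resize from full resolution.

def srgb_to_linear(v):
    """Standard sRGB electro-optical transfer function, v in [0, 1]."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Inverse sRGB transfer function, v in [0, 1]."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Downscale a two-pixel black/white edge to one pixel.
black, white = 0.0, 1.0

naive = (black + white) / 2  # average the coded values directly: 0.5
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)

print(f"gamma-space average:  {naive:.3f}")
print(f"linear-light average: {correct:.3f}")  # noticeably brighter
```

The two results differ by a clearly visible amount, which is one reason you can't treat a partial progressive pass as a drop-in substitute for a real thumbnail.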

Lots of image-processing libraries don't support it anyway.

@jaffathecake and for image-heavy desktop apps like galleries, the game has completely changed due to GPUs, and users expecting to be able to zoom out and see 10 years of their photo-roll thumbnails at once, instantly. That needs lots of preprocessing and GPU-specific compression. The input format almost doesn't matter, because it won't be read directly in real time.
@kornel @jaffathecake for web cases, preloading the thumb and lazy-loading the rest would have some utility. I wish it made more sense for responsive loading (which gets a lot easier once you wait for layout) but alas, all partial loads (even with JXL) are a lot bigger than they need to be, because most of the loss is in the higher frequencies.
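The frequency point above can be illustrated with a toy 1-D DCT (a hypothetical sketch: real codecs use 2-D transforms plus quantization and entropy coding, but the energy-compaction effect is the same). For smooth content, almost all of the signal lives in a few low-frequency coefficients, so the remaining coefficients in a partial load are bytes a thumbnail doesn't need:

```python
import math

def dct_ii(x):
    """Unnormalized DCT-II of a 1-D signal (textbook formula)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A smooth half-sine gradient stands in for typical photographic content.
signal = [math.sin(math.pi * n / 31) for n in range(32)]

coeffs = dct_ii(signal)
energy = [c * c for c in coeffs]
low_share = sum(energy[:4]) / sum(energy)  # fraction carried by lowest 4 of 32

print(f"energy in lowest 4 of 32 coefficients: {low_share:.1%}")
```

For this smooth input the lowest few coefficients carry well over 95% of the energy, which is exactly what progressive ordering exploits, and why the mid/high-frequency data riding along in a partial load is mostly wasted on a thumbnail.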
@kornel @jaffathecake Do you know anyone with use cases like this doing or interested in doing GPU compression?
@castano @jaffathecake Do you mean on web or native? For assets from disk or network?

@kornel I'm rereading your statement and I guess what you mean is that applications with lots of thumbnails are targeting GPU formats directly, which is true in some cases.

However, I think there's an opportunity to deliver thumbnails in a more compact image format and transcode them to a GPU format on the device. I wonder if there's anyone doing this, or interested in doing so.

@castano Yeah, both strategies are commonly used. It depends on whether you prioritize saving bandwidth above everything else, or want to avoid extra work at startup/install time. Basis Universal is notable for being somewhere in the middle.