Folks who want to see JPEG-XL supported in more browsers, what is it about the format that attracts you to its use on the web compared to currently supported formats?
@jaffathecake have u seen webp?

@arichtman avif is pretty great, isn’t it?

@urlyman @arichtman @jaffathecake No, #AVIF isn't progressively decodable, can't be losslessly re-encoded to and from #JPEG, and doesn't support splines like #JPEG_XL does - see #svg2jxl: https://www.reddit.com/r/jpegxl/comments/u8kse2/svg_to_jpeg_xl/
@niutech @urlyman @arichtman how do you feel about JXL leadership pushing back against implementing progressive decoding despite it being the first benefit you mentioned? https://github.com/web-platform-tests/interop/issues/994#issuecomment-3376722739
@jaffathecake @urlyman @arichtman Are they really pushing back against progressive decoding? What I read in the comment you linked is "While progressive rendering is a nice-to-have feature, it is not the critical reason for JPEG XL's success. (...) We should not make progressive rendering tests a blocker for JPEG XL's inclusion". That's not opposing it. JPEG XL has more benefits than progressive decoding, so it should finally be included in #interop26.
@niutech @urlyman @arichtman if it's included, I think it should include the feature that you and many, many others mention as its #1 benefit - progressive decoding
@jaffathecake @urlyman @arichtman Yes, and jxl-oxide supports progressive decoding, doesn't it?
@niutech @urlyman @arichtman it does! It needs to be a lot faster before it's ready for the web though

@jaffathecake

  • Perfectly reversible, lossless conversion from JPEG with ~20% savings.
  • ~60% smaller file sizes than JPEG at the same quality when using lossy compression.
  • Negligible compression artefacts when using lossy compression.
  • Great colour gamut support, including HDR, and support for other channels.
  • Super fast encoding and decoding.
  • Support for progressive decoding.
  • Tiny file header at a mere 12 bytes.
  • Everything WebP and AVIF boasts, including transparency, animations, and such.
  • Resilient against generational loss.
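To make the "12 bytes" point concrete: a bare JPEG XL codestream starts with the 2-byte signature 0xFF 0x0A, and the ISOBMFF container starts with a fixed 12-byte signature box. A quick sniffing sketch in JavaScript (our own illustration, not from the thread):

```javascript
// The fixed 12-byte JPEG XL container signature box:
// size 0x0000000C, type 'JXL ', payload 0x0D 0x0A 0x87 0x0A.
const JXL_CONTAINER_SIG = [
  0x00, 0x00, 0x00, 0x0c, 0x4a, 0x58, 0x4c, 0x20, 0x0d, 0x0a, 0x87, 0x0a,
];

// Classify the start of a byte buffer as a JPEG XL codestream,
// a JPEG XL container, or neither.
function sniffJxl(bytes) {
  if (bytes.length >= 2 && bytes[0] === 0xff && bytes[1] === 0x0a) {
    return 'jxl-codestream';
  }
  if (bytes.length >= 12 && JXL_CONTAINER_SIG.every((b, i) => bytes[i] === b)) {
    return 'jxl-container';
  }
  return null;
}
```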
@vale if you were to pick one or two things that make it worth having vs AVIF, what would those be?

@jaffathecake Really hard to pick!

The lossless conversion from JPEG is huge, as it more or less makes it a better JPEG. Progressive decoding is also fantastic.

@jaffathecake progressive rendering! For use in above-the-fold images
On Container Queries, Responsive Images, and JPEG-XL (Cloud Four)
@jaffathecake ah, fans of the srcset attribute who provide the same picture in 5 formats and 20 sizes?
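For context, the boilerplate being poked at: a sketch (hypothetical names, ours) of generating `srcset` candidates for one image in one format - multiply by every format for the full effect:

```javascript
// Hypothetical helper: build a srcset string for one image at many widths,
// assuming files are named like "/img/hero-640w.avif".
function buildSrcset(basePath, ext, widths) {
  return widths.map((w) => `${basePath}-${w}w.${ext} ${w}w`).join(', ');
}

buildSrcset('/img/hero', 'avif', [320, 640, 1280]);
// → "/img/hero-320w.avif 320w, /img/hero-640w.avif 640w, /img/hero-1280w.avif 1280w"
```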

@jaffathecake I can't wait for another format that is very widely supported on the web while somehow still occasionally ignored by native applications like webp

(I'm being sarcastic)

@jaffathecake I've been exposed to quite a few questions/opinions about JPEG-XL over the last couple of years. From what I've seen and heard mentioned by others, its lossless compression is best-in-class for both file size and encode/decode times. Its lossy encoding speed is also brilliant compared with the CPU-intensive AVIF/AV1 but I realise that's less of a concern to web browsers. There's some discussion at https://github.com/lovell/sharp/issues/2731 if you haven't already seen it.

@lovell @jaffathecake

> Its lossy encoding speed is also brilliant compared with the CPU-intensive AVIF/AV1

This is pretty important to Next.js since images are optimized on demand when they are requested (based on the Accept header).

We noticed a lot of users don't like how long it takes to convert to AVIF (compared to WebP), which is why AVIF is still opt-in but WebP is the default.
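A rough sketch of that Accept-header negotiation (our own simplification, not Next.js code; a real implementation would also honour q-values and wildcards):

```javascript
// Formats the server can encode, best first (hypothetical preference order).
const PREFERENCE = ['image/avif', 'image/webp'];

// Pick the best format the client advertises in its Accept header,
// falling back to a universally supported type.
function pickFormat(acceptHeader, fallback = 'image/jpeg') {
  const offered = acceptHeader
    .split(',')
    .map((part) => part.split(';')[0].trim().toLowerCase());
  for (const type of PREFERENCE) {
    if (offered.includes(type)) return type; // first supported match wins
  }
  return fallback;
}

pickFormat('image/avif,image/webp,image/*,*/*'); // → "image/avif"
```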

@lovell fwiw, I just did some testing, and JPEG XL takes 2.5x as long to decode as an equivalent AVIF.
@jaffathecake Ooh, was this with rust-based decoders or the C++ "reference" libraries or perhaps something else? Do you have example images that led to this figure? It'd be great to get some fair and up-to-date benchmark comparisons of lossy/lossless encoding/decoding. Thank you!
@lovell I tested the Safari shipped implementation, but also the behind-a-flag Firefox implementation, and the old Chromium behind-a-flag one. The test: https://random-stuff.jakearchibald.com/apps/img-decode-bench/

@jaffathecake Great, thank you, I'll take a closer look at those later. My understanding is that most ARM-based Apple devices released in the last 2-3 years support hardware decoding of AV1 whereas JXL decoding will be software-only (and will remain so for a while I suspect).
@lovell I don't think hardware is used for AVIF decoding, but I could be wrong
@jaffathecake As of August 2024 WebKit/Safari delegates AVIF decoding to the underlying Image I/O Framework but I'm unsure for which processors this might be hardware accelerated yet, if any. https://bugs.webkit.org/show_bug.cgi?id=277578
@lovell @jaffathecake I don't know if Apple is using AV1 hardware to decode AVIFs but I do think Apple is doing JXL decoding on a single core, which is leaving some perf on the table: https://www.reddit.com/r/jpegxl/comments/1djcccj/comment/l9at8p3/

@eeeps @jaffathecake Yes, there's no sign of libjxl's multi-threaded API in WebKit https://github.com/WebKit/WebKit/blob/main/Source/WebCore/platform/image-decoders/jpegxl/JPEGXLImageDecoder.cpp

Other performance-related factors here might be that the current WebKit logic first fully decodes a JPEG-XL image into memory, then copies the entire thing pixel-by-pixel into another bit of memory, and then, because of the embedded ICC profile, hands over to lcms to convert pixel values.

@jaffathecake An infrequently mentioned feature of JPEG-XL is that it can have multiple image layers with different compression. AKA, it is the perfect image format for text-over-photo memes.

@AmeliaBR @jaffathecake Does this have current authoring tool support?

(I use JPEG XL off the Web myself, so I’m not generally a JPEG XL skeptic, but I’m curious which things are in the category “I have seen this in action off the Web and want to have it on the Web” and which things are in the category “I have not witnessed this, yet, but I want it from a feature list”.)

@hsivonen That's a good question. I'd assume it would be a key feature of exporting from editing software like Photoshop which has an integral layer model, so that you could re-import & separate it back out into editable layers. But I can't find any details in their support docs (beyond the fact that JPEG XL export was fairly recently added at all) & I don't have the software to check! @jaffathecake
@AmeliaBR @jaffathecake I tested the latest non-beta Photoshop (26.11.0) just now. Of the various commands that write files, JPEG XL is available only under Save a Copy… (mentioning this in case someone else wants to try; it took me a moment to find it). There the Layers checkbox is disabled for JPEG XL when the picture to be saved has a text layer. (The checkbox is enabled for TIFF.) Saving to JPEG XL flattens the layers. Could change in the future, but this is what it’s like now.
@AmeliaBR @jaffathecake Or at least flattens the layers in the sense that reopening in Photoshop itself does not show layers.

@jaffathecake a lot of what other folks said. The compression and the performance are the features I'm interested in. Also, while I guess you could get this with AVIF too, being able to use the same format from camera to browser would make photo workflows a lot easier.

I don’t think the question should be AVIF OR JPEG-XL, though. I feel like having good support for both would benefit the web as upgrades over the current formats.

@jaffathecake more native support for formats used elsewhere means fewer build steps necessary to ship websites.
@ardouglass what one or two things do you want JXL for that isn't available in AVIF?
@jaffathecake for me it’s mostly about the ability to put a file on the web without having to convert it. But the improved file size is nice too, especially on image heavy sites.
@jaffathecake JPEG XL covers almost everything missing from currently supported formats (progressive decoding, wide colour gamut, transparency, >8k resolution) which makes it a 'definitive' image format for the web, at least for the next 5-10 years.

By far, I would be most excited if browsers could handle progressive decoding on one image file rather than websites having to handle multiple file sizes.

I also like that it has lossless conversion with the existing JPEG format.

@jaffathecake I think for the web, the two key features of jxl compared to other formats are:

1. Reliable and very effective high-fidelity lossy compression, in particular for HDR images which I hope will get traction on the web in the near future;

2. Lossless JPEG recompression: a no-brainer to improve delivery of existing legacy images for which only already-compressed versions exist.

There are other nice things like progressive decoding, future-proof format design, usability across the whole workflow, etc.

@jonsneyers the replies from web developers seem to put progressive decoding as a much higher priority. Are they wrong?

@jaffathecake I love progressive rendering, but the reality is that situations in which it makes user-visible difference are getting increasingly rare.

Whenever loading of a 200KB image takes seconds, you're probably already mad at the 5MB JS bundle that blocks it.

@kornel I agree for progression throughout the file, but I feel there's still benefit in getting a representative render in the first couple of k of a file

@jaffathecake it's harder than it seems.

Browsers throttle re-rendering of pages during loading. Lots of things can block and delay render.

TLS makes data arrive in blocks, often 16KB (configurable for those who know how, but adds overhead).

Congestion and bufferbloat make data arrive in laggy bursts rather than slowly. Very bad signal strength also tends to be on/off. You may need HTTP/2 prioritization tricks and large images to even have partial data to render.

@jaffathecake Progressive is useful server-side when making thumbnails, but also tricky.

Downloading less data is hard due to latency of cancellation. Server-side connections are way faster, and all images seem insignificant compared to Docker images.

Partial progressive render doesn't get the same sharpness and gamma as proper image resizing from full-res, so you need to overshoot progressive res to be safe, which lessens the savings.

Lots of image proc libraries don't support it anyway.

@jaffathecake and for image-heavy desktop apps like galleries, the game has completely changed due to GPUs, and users expecting to be able to zoom out and see 10 years of their photo roll thumbnails at once, instantly. This needs lots of preprocessing, GPU-specific compression. Input format almost doesn't matter, because it won't be read in real time directly.
@kornel @jaffathecake for web cases, preloading the thumb and lazyloading the rest would have some utility. I wish it made more sense for responsive loading (which gets a lot easier when you wait for layout) but alas, all partial loads (even with JXL) are a lot bigger than they need to be because most of the loss is in higher frequencies.
@kornel @jaffathecake Do you know anyone with use cases like this doing or interested in doing GPU compression?
@castano @jaffathecake Do you mean on web or native? For assets from disk or network?

@kornel I'm rereading your statement and I guess what you mean is that applications with lots of thumbnails are targeting GPU formats directly, which is true in some cases.

However, I think there's an opportunity to deliver thumbnails in a more compact image format and transcode them to a GPU format on the device. I wonder if there's anyone doing this, or interested in doing so.

@castano Yeah, both strategies are commonly used. It depends whether you prioritize saving bandwidth above everything else, or want to avoid extra work on startup/install. Basis Universal is notable for being somewhere in the middle.

@jaffathecake we've seen a gallery of people's handcrafted ones that make very creative use of its features

we'd like to see the web have more room for creativity, in general

the lossless conversion from older formats is also really nice

@jaffathecake progressive decoding by far (I absolutely despise the blurry image crap and hope that this could replace it). But also the lossless recompression of existing JPEGs and wide gamut.

@jaffathecake

I use JPEG-XL mainly for lossless, like small logos and images which don't work well with lossy, or need to look particularly good. JPEG-XL is noticeably smaller than WebP for this, which is itself better than AVIF (which has quite poor lossless). It's also good for high-fidelity lossy, especially for images. The video-derived AVIF and WebP are incredible (and better) for low-bitrate compression, but not so great for less compressed images.

I also like the progressive decoding a lot, and it's much faster to encode than AVIF.

@FormularSumo fwiw, I've been using lossy AVIF in places I used to use lossless https://jakearchibald.com/2020/avif-has-landed/#why-not-lossy

@jaffathecake Fair enough, I remember reading this a few years ago! I'll have another try with tuning lossy AVIF, but last I tried it didn't perform as well as lossless JPEG-XL or even WebP for some use cases, mainly when the image dimensions are particularly small. This is usually things like logos and icons, but I could also see being quite useful for some game assets.
@jaffathecake Besides what others already said here (recompression of existing JPEGs, fast encoding speeds, better quality and compression than everything else out there), one particularly unique feature of #JPEGXL as far as I'm aware is the ability to encode images for progressive decoding using saliency, i.e. delivering specific parts of an image first: https://opensource.googleblog.com/2021/09/using-saliency-in-progressive-jpeg-xl-images.html
@jaffathecake Unlike WebP & AVIF in some respects, JPEG XL is incredibly well thought out, and can be used at every stage of the image pipeline – from RAW camera data through optimized web delivery.