JPEG XL is fantastic, yet autocratic Google wants to force an inferior format

It's not just Google; Mozilla also isn't interested in supporting it. I also often see these articles that tout JPEG XL's technical advantages, but in my subjective testing with image sizes you would typically see on the web, AVIF wins every single time. It not only produces fewer artifacts on medium-to-heavily compressed images, but they're also less annoying: minor detail loss and smoothing, compared to JPEG XL's blocking and ringing (in addition to detail loss; basically the same types of artifacts as with the old JPEG).

Maybe there's a reason they're not bothering with supporting xl besides misplaced priorities or laziness.

> Mozilla also isn't interested in supporting it

Mozilla is more than willing to adopt it. They just won't adopt the C++ implementation. They've already put into writing that they're considering adopting it once a Rust implementation is production-ready.

https://github.com/mozilla/standards-positions/pull/1064

There's way more than one Rust implementation around:

- https://github.com/libjxl/jxl-rs

- https://github.com/tirr-c/jxl-oxide

- https://github.com/etemesi254/zune-image

Etc. You can wait for 20 or so years "just to be sure" or start doing something. Mozilla is sticking with option A here by not doing anything.

The jxl-oxide dev is a jxl-rs dev. jxl-oxide is decode-only, while jxl-rs is a full encode/decode library.

zune also uses jxl-oxide for decode. zune has an encoder and they are doing great work, but their encoder is not thread-safe, so it's not viable for Mozilla's needs.

And there's work already being done to properly integrate JXL implementations into Firefox, but frankly, things take time.

If you are seriously passionate about seeing JPEG-XL in Firefox, there's a really easy solution: contribute. More engineering hours put towards a FOSS project tend to see it come to fruition faster.

You have a really strange interpretation of the word “consider”.

Seems like the normal usage to me. The post above lists other criteria that have to be satisfied, beyond just being a Rust implementation. That would be the consideration.

Mozilla indicates that they are willing to consider it given various prerequisites. GP translates that to being “more than willing to adopt it”. That is very much not a normal interpretation.

From the link

> To address this concern, the team at Google has agreed to apply their subject matter expertise to build a safe, performant, compact, and compatible JPEG-XL decoder in Rust, and integrate this decoder into Firefox. If they successfully contribute an implementation that satisfies these properties and meets our normal production requirements, we would ship it.

That is a perfectly clear position.

How far along is the JPEG-XL Rust version from Google if Chrome is not interested in it?

You can review it here: https://github.com/libjxl/jxl-rs

Seems to be under very active development.

Now I'm feeling a bit less bad for not using Firefox anymore. Not adopting it just because it's written in C++ is <insert terms that may not be welcome on HN>

So you think it's silly to not want to introduce new potentially remotely-exploitable CVEs in one of the most important pieces of software (the web browser) on one's computer? Or are you implying those 100k lines of multithreaded C++ code are bug-free and won't introduce any new CVEs?

> and don’t think that the programmer more than the languages contribute to those problems

This sounds a lot like how I used to think about unit testing and type checking when I was younger and more naive. It also echoes the sentiments of countless craftspeople talking about safety protocols and features before they lost a body part.

Safety features can’t protect you from a bad programmer. But they can go a long way to protect you from the inevitable fallibility of a good programmer.

I never said anything about unit testing or type checking. Last time I checked, C/C++ are strongly typed, but I guess I'm just too naïve to understand.

It's crazy how anti-Rust people think that eliminating 70% of your security bugs[1] by construction just by using a memory-safe language (not even necessarily Rust) is somehow a bad thing or not worth doing.

[1] - https://www.chromium.org/Home/chromium-security/memory-safet...

I'm not anti-Rust, but I'm not drinking its Kool-Aid either.

It's not about being completely bug-free. Safe Rust is going to be reasonably hardened against exploitable decoder bugs which can be converted into RCEs. A bug in safe Rust is going to be a hell of a lot harder to turn into an exploit than a bug in bog-standard C++.
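To make that concrete, here's a minimal sketch (not taken from any real decoder; the names and numbers are made up) of the classic length-field bug that malformed images tend to trigger. In C++ the same mistake can silently read past the end of the buffer; in safe Rust the slice access is bounds-checked, so the worst case is a clean rejection or a panic, not memory corruption:

```rust
// Minimal illustration only, not code from any actual JXL/AVIF decoder.
// A "chunk" claims a payload length larger than the bytes actually present,
// the classic malformed-input bug in image parsers.
fn read_chunk(data: &[u8]) -> Option<&[u8]> {
    // First byte is the claimed payload length (attacker-controlled).
    let claimed_len = *data.first()? as usize;
    // Bounds-checked slicing: if the claim exceeds the buffer we get None
    // instead of an out-of-bounds read.
    data.get(1..1 + claimed_len)
}

fn main() {
    // Malformed input: claims 200 payload bytes but only carries 3.
    let malformed = [200u8, 1, 2, 3];
    match read_chunk(&malformed) {
        Some(payload) => println!("payload: {payload:?}"),
        None => println!("rejected malformed chunk"), // safe failure path
    }
}
```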

> It’s crazy how people think using Rust will magically make your code bug and vulnerability free

It won't for all code, and not bug-free, but it absolutely does make it possible to write code that parses untrusted input all but vulnerability-free. It's not 100% foolproof, but the track record of Rust parsing libraries is night-and-day better than that of C/C++ libraries in this domain. And they're often faster too.

Straw-man much?

Nope, not at all actually.

Multiple severe attacks on browsers over the years have targeted image decoders. Requiring an implementation in a memory safe language seems very reasonable to me, and makes me feel better about using FF.

It's not just "C++ bad". It's "we don't want to deal with memory errors in directly user-facing code that parses untrusted content".

That's a perfectly reasonable stance.

I did some reading recently, for a benchmark I was setting up, to try and understand what the situation is. It seems things have started changing in the last year or so.

Some links from my notes:

https://www.phoronix.com/news/Mozilla-Interest-JPEG-XL-Rust

https://news.ycombinator.com/item?id=41443336 (discussion of the same GitHub comment as in the Phoronix site)

https://github.com/tirr-c/jxl-oxide

https://bugzilla.mozilla.org/show_bug.cgi?id=1986393 (land initial jpegxl rust code pref disabled)

In case anyone is curious, here is the benchmark I did my reading for:

https://op111.net/posts/2025/10/png-and-modern-formats-lossl...

No, the situation with image compression has not changed. The grandparent poster you were replying to was writing about typical web usage, that is, "medium-to-heavily compressed images", while your benchmark is about lossless compression.

BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393

No. demetris’ benchmark of lossless image compression is not a sign that the situation may be changing. :-D

That was just the context for some reading I did to understand where we are now.

> BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393

That is one of the links I shared in my comment (along with the bug title in parentheses). :-)

I've had exactly the opposite outcome with AVIF vs JPEG-XL. I've found that jxl outperforms AVIF quite dramatically at low bitrates.

Same in my experience testing and deploying a few sites that support both. In general the only time AVIF outperformed in file size for me was with laughably low quality settings beyond what any typical user or platform would choose.

And for larger files especially, the benefit of actually having progressive decoding pushed me even more in favour of JPEG XL. Doubly so when you can provide variations in image size just by halting the bit stream arbitrarily.
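As a rough sketch of what "halting the bit stream" can look like in practice (the file name and the 1/4 cut point below are arbitrary placeholders, and how gracefully a truncated codestream decodes depends on how the image was encoded and how tolerant the decoder is of partial input):

```rust
// Illustration of the idea above: with a progressive JPEG XL codestream, a
// lower-detail variant can be served by keeping only a prefix of the bytes.
// The 1/4 cut is a blind byte count purely for demonstration; a real pipeline
// would pick cut points aligned to the image's progressive passes.
use std::fs;

fn main() -> std::io::Result<()> {
    let full = fs::read("photo.jxl")?; // placeholder path
    let cut = full.len() / 4;
    fs::write("photo.preview.jxl", &full[..cut])?;
    println!("kept {cut} of {} bytes", full.len());
    Ok(())
}
```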

JPEG-XL is optimized for low to zero levels of compression, which aren't as commonly used on the web, but it definitely fills a need.

Google cited insufficient improvements, which is a rather ambiguous statement. Mozilla seems more concerned with the attack surface.

JPEG XL seems optimally suited for media and archival purposes, and of course this is something you’d want to mostly do through webapps nowadays. Even relatively basic use cases like Wiki Commons are basically stuck on JPEG for these purposes.

For the same reason it would be good if a future revision of PDF/A included JPEG XL, since PDF/A doesn't really have any decent codecs for low-loss (but not lossless) compression (e.g. JPEG sucks at color schematics/drawings, and lossless is impractically big for them). It did get JP2, but support for that is quite uncommon.

>but in my subjective testing with image sizes you would typically see on the web, avif wins every single time.

What is that in terms of bpp? Because according to Google Chrome data, 80-85% of the images we deliver on the web have a bpp of 1.0 or above. I don't think most people realise that.

And in most if not all circumstances, JPEG XL performs better than AVIF at bpp 1.0 and above, as tested by professionals.
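For anyone unfamiliar with the metric, bpp is just the compressed file size in bits divided by the pixel count, so it's easy to check where your own images fall. A minimal sketch with made-up numbers:

```rust
// Bits per pixel = (file size in bytes * 8) / (width * height).
fn bpp(file_size_bytes: u64, width: u32, height: u32) -> f64 {
    (file_size_bytes as f64 * 8.0) / (width as f64 * height as f64)
}

fn main() {
    // Made-up example: a 150 KB image at 1200x800 pixels.
    let value = bpp(150_000, 1200, 800);
    println!("{value:.2} bpp"); // prints 1.25, above the 1.0 threshold discussed above
}
```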

I wish they had separated the lossless codec into "WebPNG." WebP is better than PNG, but it's too risky to use (and tell people to use) a lossless format that silently becomes lossy if you forget a setting.