When exporting images from Lightroom Classic as JPEG XL you can choose a compression percentage, or choose lossless, which of course disables that setting. It also defaults to 8-bit, with an option for 16-bit, which of course results in a much larger file, plus a color profile setting. So I'm curious what they mean by it ignoring bit depth?

I did some sample exports comparing 8-bit lossless JXL vs JPG, and the JXL was quite a bit bigger. Same when comparing both at lossy 100 or 99. When setting JXL to 80% or 70% I see noticeable savings, but I had thought the idea was that JXL gives essentially full quality at much smaller sizes.

To be fair, the 70% does look very similar to 100%, but then again JPEG at 70% vs 100% also looks very similar on an Apple XDR monitor. At 70% or 80% on both JPEG and JPEG XL I do see visual differences in areas like shoes where there is mesh.

JXL also comes with a lot of compatibility challenges. Things were picking up with Apple's adoption, but momentum seems to have stalled since then, and apps like Evoto and Topaz haven't added support, among many others. Apple's support still isn't complete, with no progress on that. So unless Chrome does a 180 again, I think AVIF and JXL will both end up stagnating, with most people sticking with JPG. For TIFF, though, I noticed significant savings with lossless JXL compared to TIFF, so that would be a good use case, except TIFFs are the files most likely to be edited by third-party apps that probably won't support the format.

For lossless, bit depth of course does matter. Lossless image compression means storing a 2D array of integers exactly, and with higher bit depth the range of those numbers grows (and so does the number of hard-to-compress least-significant bits).
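
As a rough, made-up illustration of that point (nothing JXL- or Lightroom-specific), here's a Python sketch that losslessly compresses the same synthetic gradient-plus-noise "scene" stored at 8-bit and at 16-bit. zlib is just standing in for a real lossless image codec and all the numbers are invented, but it shows how the extra noisy least-significant bits make the 16-bit version compress disproportionately worse.

    import zlib
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "photo": a smooth gradient plus sensor-like noise, in 16-bit range.
    h, w = 1024, 1024
    gradient = np.linspace(0, 65535, w)[None, :].repeat(h, axis=0)
    noise = rng.normal(0, 300, size=(h, w))
    scene16 = np.clip(gradient + noise, 0, 65535).astype(np.uint16)
    scene8 = (scene16 >> 8).astype(np.uint8)  # same image quantized to 8 bits

    for name, arr in [("8-bit", scene8), ("16-bit", scene16)]:
        raw = arr.tobytes()
        packed = zlib.compress(raw, 9)
        print(f"{name}: raw {len(raw) / 1e6:.1f} MB -> lossless {len(packed) / 1e6:.1f} MB")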

The OP article is talking about lossy compression.

When comparing lossy compression, note that quality settings are not a "percent" of anything; they're just an arbitrary scale that gets mapped to encoder parameters (e.g. quantization tables) in some encoder-specific way. So lossy "80%" is certainly not the same thing between JPEG and JXL, or between Photoshop and ImageMagick, etc.
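
To make that "arbitrary mapping" concrete, here's a small Pillow sketch (the filename is made up) that encodes the same image at two "quality" values and prints the luminance quantization table each JPEG ended up with; a different encoder would map the same numbers to different tables entirely.

    from PIL import Image

    # Encode at Pillow's "quality" 80 and 90, then inspect the luma quantization
    # table baked into each JPEG: the quality knob is just a recipe for scaling
    # these tables, not a percentage of anything.
    src = Image.open("photo.png").convert("RGB")  # hypothetical source image
    for q in (80, 90):
        src.save(f"photo_q{q}.jpg", quality=q)
        tables = Image.open(f"photo_q{q}.jpg").quantization
        print(f"quality={q}, luma quant table:", tables[0])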

The best way to compare lossy compression performance is to encode an image at a quality that is acceptable for your use case (according to your eyes), and then for each codec/encoder look at the lowest file size you can get while still keeping that acceptable quality.
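
Something like the sketch below is one way to do that sweep outside Lightroom (filenames are made up; it assumes Pillow for the JPEGs and the reference cjxl CLI on your PATH for the JXLs). The q values aren't meant to be comparable across the two encoders; they just generate candidates that you then judge by eye.

    import os
    import subprocess
    from PIL import Image

    SRC = "photo.png"  # hypothetical test export
    img = Image.open(SRC).convert("RGB")

    for q in (95, 90, 85, 80, 75, 70):
        jpg = f"sweep_q{q}.jpg"
        jxl = f"sweep_q{q}.jxl"
        img.save(jpg, quality=q)                                      # Pillow JPEG
        subprocess.run(["cjxl", SRC, jxl, "-q", str(q)], check=True)  # reference JXL encoder
        print(f"q={q}: JPEG {os.path.getsize(jpg):,} B, JXL {os.path.getsize(jxl):,} B")

    # View the candidates at your normal zoom and keep the smallest file whose
    # artifacts you can't see; that's the fair cross-codec comparison point.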