For lossless compression, bit depth of course does matter. Lossless image compression stores a 2D array of integers exactly, and with higher bit depth the range of those numbers grows (and so does the number of hard-to-compress least significant bits).
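To make that concrete, here's a quick sketch (my own illustration, not from the article) using zlib as a stand-in for a lossless image codec; PNG is essentially DEFLATE after prediction filtering, so the effect is similar. The same slightly noisy signal is quantized to 8 and 16 bits: the extra low-order bits in the 16-bit version are mostly noise and barely compress.

```python
import zlib
import numpy as np

# Illustrative only: zlib/DEFLATE as a stand-in for a lossless image codec.
n = 1 << 20
rng = np.random.default_rng(0)
signal = np.clip(np.linspace(0, 1, n) + rng.normal(0, 0.002, n), 0, 1)

for bits, dtype in ((8, np.uint8), (16, np.uint16)):
    # Quantize the same signal to 8 and 16 bits; the 16-bit version's
    # least significant bits are mostly noise and compress poorly.
    raw = (signal * ((1 << bits) - 1)).astype(dtype).tobytes()
    packed = zlib.compress(raw, level=9)
    print(f"{bits}-bit: {len(raw)} raw bytes -> {len(packed)} compressed")
```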
The OP article is talking about lossy compression.
When comparing lossy compression, note that quality settings are not a percentage of anything; they're an arbitrary scale that each encoder maps to its own parameters (e.g. quantization tables) in its own way. So lossy "80%" is certainly not the same thing between JPEG and JXL, or between Photoshop and ImageMagick, etc.
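A quick way to see this for yourself (a rough sketch using Pillow, which is my choice here, with "photo.png" as a placeholder; it assumes Pillow is built with WebP support):

```python
from io import BytesIO
from PIL import Image

img = Image.open("photo.png").convert("RGB")  # placeholder filename

# The same nominal quality=80 goes through completely different encoder
# machinery per format, so file size (and actual visual quality) differ.
for fmt in ("JPEG", "WEBP"):
    buf = BytesIO()
    img.save(buf, format=fmt, quality=80)
    print(f"{fmt:5s} quality=80 -> {buf.tell()} bytes")
```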
The best way to compare lossy compression performance is to encode an image at a quality that is acceptable for your use case (according to your own eyes), and then, for each codec/encoder, find the lowest file size you can get while the result still looks acceptable.
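In practice that can be as simple as sweeping the quality scale and eyeballing the results. A minimal sketch, again using Pillow and a placeholder filename; the "acceptable" judgment stays with you, the code just produces the candidates:

```python
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")  # placeholder filename

# Sweep the quality scale, write each candidate to disk, then look at them
# with your own eyes and keep the smallest file that still looks fine.
# Repeat per codec/encoder and compare those "lowest acceptable" sizes.
for q in (50, 60, 70, 80, 90):
    out = f"candidate_q{q}.jpg"
    img.save(out, quality=q)
    print(f"quality={q}: {os.path.getsize(out):>8} bytes -> {out}")
```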