If I find myself reaching a point where I would need to deal with ABIs and binary compatibility, I pretty much stop there and ask "is my workload so important that I need to recompile half the world to support it?" and the answer (for me) is always no.
Well, handling OS-dependent binary dependencies is still an unsolved problem, because of the intricate behavior of native libraries and especially how tightly C and C++ compilers integrate with their host operating systems. vcpkg, Conan, containers, Yocto, and Nix each target a limited slice of it, so there is no fully satisfactory solution. Pixi comes very close, though.
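What "coming close" looks like in practice: Pixi lets a project declare the platform and driver slice it targets in its manifest, so the solver only picks binaries built for that slice. A rough sketch of such a manifest (field names as I recall them from Pixi's manifest format; package choices and versions are purely illustrative):

```toml
# Sketch of a pixi.toml pinning the binary slice a project targets.
[project]
name = "gpu-workload"
channels = ["conda-forge"]
platforms = ["linux-64"]

# Declare what the host must provide; the solver then selects
# package builds compatible with this environment.
[system-requirements]
cuda = "12"            # require a driver exposing at least CUDA 12

[dependencies]
python = "3.11.*"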
The Conda ecosystem is forced to solve this problem to a point, since ML libraries and their binary backends are terrible at keeping their ABIs stable. Moreover, different GPUs have different capabilities and support different versions of GPGPU execution engines like CUDA. There is no easy way out without solving dependency hell.
If you’re writing code for an accelerator, surely you care enough to make sure you can properly target it?