First, those “optimizations” are not IEEE 754 compliant. Nondeterminism in floating-point operations is therefore not an inherent property of floating-point arithmetic; it’s a consequence of disregarding the standard by deliberately opting in to such nondeterminism.
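To make the distinction concrete, here is a minimal sketch of the kind of thing such optimizations do: every individual IEEE 754 addition is fully deterministic, but addition is not associative, so a compiler or kernel that reassociates a sum (as fast-math style flags permit) can change the result. The specific values are chosen only for illustration.

```python
# Each IEEE 754 double-precision addition below is exactly specified
# and deterministic. But addition is not associative, so reordering
# the operations -- which non-compliant "fast-math" optimizations
# are allowed to do -- changes the answer.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # 0.0 + 1.0 -> 1.0
right = a + (b + c)  # 1.0 is absorbed: -1e16 + 1.0 rounds to -1e16

print(left, right)   # prints: 1.0 0.0
```

The point is that the standard pins down the result for a *given* order of operations; nondeterminism only enters once something is allowed to vary that order.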

Secondly, as I quoted, the paper explicitly makes the point that there are sources of nondeterminism outside the models and GPUs, hence ensuring that the floating-point arithmetic itself is deterministic doesn’t help.
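A small sketch of how that external source can look, under an assumption made purely for illustration (that a serving system splits a reduction into chunks whose size depends on runtime conditions such as batch size): every operation is deterministic, yet the final result varies with the chunking.

```python
import random

# Hypothetical illustration: the same values summed with different
# chunk sizes, standing in for a server that partitions work
# differently depending on load or batch size. Each individual
# addition is deterministic; only the grouping varies.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

def chunked_sum(values, chunk):
    # Sum each chunk, then sum the partial results -- a fixed,
    # deterministic procedure for any given chunk size.
    partials = [sum(values[i:i + chunk])
                for i in range(0, len(values), chunk)]
    return sum(partials)

# The two results agree to many digits but may differ in the last
# bits, because addition is not associative.
print(chunked_sum(xs, 7), chunked_sum(xs, 64))
```

So even with strictly IEEE 754 arithmetic, run-to-run reproducibility additionally requires that the surrounding system keep the order and grouping of operations fixed.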