I believe you've misread - the Nvidia article and your quote support my point. Only by disabling the fp optimizations are the authors able to stop the inaccuracies.
First, those “optimizations” are not IEEE 754 compliant. So nondeterminism is not an inherent property of floating-point arithmetic; it's a consequence of disregarding the standard by deliberately opting in to such nondeterminism.
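To see why reordering optimizations matter here: IEEE 754 arithmetic is deterministic for a fixed evaluation order, but it is not associative, so any optimization that reorders a reduction (e.g. a parallel GPU sum whose thread scheduling varies between runs) can change the result. A minimal sketch of the non-associativity, in plain Python (this is an illustration of the mechanism, not the Nvidia article's own example):

```python
# IEEE 754 double addition is deterministic but NOT associative:
# the same three values summed in a different order give a
# different result. Reordering optimizations exploit exactly
# this freedom, which is where run-to-run variation creeps in.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one summation order
right = a + (b + c)  # another summation order

print(left == right)  # False
print(left, right)    # 0.6000000000000001 0.6
```

Each order is individually reproducible; nondeterminism only appears once the order itself is allowed to vary between runs.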
Secondly, as I quoted, the paper explicitly makes the point that there is a source of nondeterminism outside of the models and GPUs, hence ensuring that the floating-point arithmetic is deterministic doesn't help.