Depends on how much you value repeatability in testing, and how much compute you have. It's a choice that has been made many times in the history of computer science.

The cost of adaptive-precision floats can be negligible depending on the application. One example I'm familiar with from geometry processing: https://www.cs.cmu.edu/~quake/robust.html
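The core building block behind that adaptive-precision approach is an error-free transformation: the rounded sum plus a second float captures the exact result. A minimal sketch of Knuth's TwoSum (the primitive Shewchuk's predicates are built from):

```python
# TwoSum (Knuth): given doubles a and b, return (s, e) where s is the
# correctly rounded sum and s + e equals a + b EXACTLY. No branches,
# no extra precision needed -- just six float operations.
def two_sum(a: float, b: float) -> tuple[float, float]:
    s = a + b
    bv = s - a                      # the portion of b that made it into s
    e = (a - (s - bv)) + (b - bv)   # rounding error, recovered exactly
    return s, e

# The error term preserves what naive addition silently loses:
s, e = two_sum(1e16, 1.0)
# s == 1e16 (the 1.0 was rounded away), but e == 1.0 keeps it around.
```

Chaining this primitive gives expansions of nonoverlapping floats, and the "adaptive" part is stopping early once the accumulated error bound shows the sign of the result can't change.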

Integer math often carries no performance penalty compared to floating point.
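A quick illustration of the trade-off, using a hypothetical currency example: exact integer units avoid the drift that binary floats accumulate, at no algorithmic cost.

```python
# Summing 0.1 ten times in binary floating point drifts off 1.0,
# because 0.1 has no exact double representation.
total_float = sum([0.1] * 10)   # 0.9999999999999999, not 1.0

# The same computation in integer "cents" is exact.
total_cents = sum([10] * 10)    # exactly 100

assert total_float != 1.0
assert total_cents == 100
```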

I guess my takeaway from this conversation is that there's a market for fast, high-precision math techniques in the AI field.