Yeah, it's not the end of the world, and as others mentioned, a good implementation can recognize the instruction pattern and optimize for it.

It's just a bizarre design choice. I understand wanting to get rid of condition flags, but not replacing them with nothing at all.

EDIT: It seems the same choice was made by MIPS, which is a clear inspiration for RISC-V.

The argument is that there are actually four distinct replacements:

1. 64-bit signed math is far less vulnerable to overflow than the 16/32-bit math that was extremely common 20 years ago

2. For the BigInt use-case, the RISC-V design is pretty sensible, since you want the top bits themselves, not just the presence of overflow

3. You can do integer operations on the FPU, using the inexact flag to detect whether rounding occurred

4. Overflow-detecting instructions can easily be added in a future extension if desired

I think in the case of MIPS, at least, the decision logic was simple: condition flags behave like an implicit register; making that register explicit would complicate the instruction encoding; and that complication would bring little benefit, since most compilers ignore flags anyway, outside of situations that can be replaced with direct tests on the result(s).