RISC-V does not have the pitfalls of experimental ISAs from 45 years ago, but it has other pitfalls that have not existed in almost any ISA since the first vacuum-tube computers, like the lack of means for integer overflow detection and the lack of indexed addressing.

Especially the lack of integer overflow detection is a choice of great stupidity, for which there exists no excuse.

Detecting integer overflow in hardware is extremely cheap, its cost is absolutely negligible. On the other hand, detecting integer overflow in software is extremely expensive, increasing both the program size and the execution time considerably, because each arithmetic operation must be replaced by multiple operations.

Because of the unacceptable cost, normal RISC-V programs choose to ignore the risk of overflows, which makes them unreliable.

The highest performance implementations of RISC-V from previous years were forced to introduce custom extensions for indexed addressing, but those used inefficient encodings, because something like indexed addressing must be in the base ISA, not in an extension.

OK, look.

Since my previous attempt to measure the impact of trapping on signed overflow didn't seem to have moved your position one bit, I thought I'd give it a go in the most representative way I could think of:

I built the same version of clang on an x86, an aarch64, and a RISC-V system, using clang. Then I built another version with the `-ftrapv` flag enabled, and compared the times these clang builds took to compile programs, running on real hardware:

    runtime:         x86         | aarch64                    | RISC-V (RVA23)
                     Zen1        |  A78          A55*         |  X100         A100
                     (all cores clocked to about 2.2GHz; Zen1 can reach almost 4GHz)
    clang A:         3.609±0.078 |  4.209±0.050   9.390±0.029 |  5.465±0.070  11.559±0.020
    clang-ftrapv A:  3.613±0.118 |  4.290±0.050   9.418±0.056 |  5.448±0.060  11.579±0.030
    clang B:         8.948±0.100 | 10.983±0.188  22.827±0.016 | 13.556±0.016  28.682±0.023
    clang-ftrapv B:  8.960±0.125 | 11.099±0.294  22.802±0.039 | 13.511±0.018  28.741±0.050


As you can see, once again the overhead of -ftrapv is quite low.

Surprisingly, the -ftrapv overhead seems highest on the Cortex-A78. My guess is that this is because clang generates a separate brk with a unique immediate for every overflow check, while on RISC-V it always branches to a single unimp per function.

Please tell me if you have a better suggestion for measuring the real world impact.

Or heck, give me some artificial worst case code. That would also be an interesting data point.

Notes:

* The format is mean±variance

* The Spacemit X100 is a Cortex-A76-like OoO RISC-V core, and the A100 is an in-order RISC-V core.

* I tried to clock all of the cores to the same frequency of about 2.2GHz. *The exception is the A55, which ran at 1.8GHz; I linearly scaled its results.

* Program A was the chibicc compiler (8K LoC) and program B was microjs (30K LoC).

    binary size:
                  x86        aarch64    RISC-V
    clang:        212807768  216633784  195231816
    clang-ftrapv: 212859280  216737608  195419512
    increase:     0.024%     0.047%     0.09%

> On the other hand, detecting integer overflow in software is extremely expensive, increasing both the program size and the execution time considerably,

Most languages don't care about integer overflow. Your typical C program will happily wrap around.

If I really want to detect overflow, I can do this:

    add t0, a0, a1
    bltu t0, a0, overflow

Which is one more instruction, which is not great, not terrible.

Because the other commenter wasn’t posting the actual answer, I went to find the documentation about checking for integer overflow and it’s right here https://docs.riscv.org/reference/isa/unpriv/rv32.html#2-1-4-...

And what did I find? Yep that code is right from the manual for unsigned integer overflow.

For signed addition if you know one of the signs (eg it’s a compile time constant) the manual says

  addi t0, t1, +imm
  blt t0, t1, overflow

But for the general case of signed addition, where you need to check for overflow and don't have knowledge of the signs:

  add t0, t1, t2
  slti t3, t2, 0
  slt t4, t0, t1
  bne t3, t4, overflow

From what I’ve read, most native compiled code doesn’t really check for overflows in optimised builds, but this is more of an issue for JavaScript et al., where they may detect the overflow and switch the underlying type? I’m definitely no expert on this.

A bit more reading shows there's a three-instruction general-case version for 32-bit additions on the 64-bit RISC-V ISA. I'm not familiar with RISC-V assembly and they didn't provide an example, but I _think_ it's as easy as this, since the 64-bit add wouldn't match the overflowed 32-bit add.

  add t0, t1, t2
  addw t3, t1, t2
  bne t0, t3, overflow

Contrast with x86:

    add eax, ecx
    jo overflow

Neither x86-64 nor RISC-V is implemented by executing each instruction one at a time. High-performance implementations of both recognize patterns in the code and translate them into micro-ops. On high-performance chips like Rivos's (now Meta's), I doubt there'd be any difference in the amount of work done.

Code size is a benefit for x86-64, however - no one is disputing that - but you have to trade that against the difficulty of instruction decoding.

That is not the correct way to test for integer overflow.

The correct sequence of instructions is given in the RISC-V documentation and it needs more instructions.

"Integer overflow" means "overflow in operations with signed integers". It does not mean "overflow in operations with non-negative integers". The latter is normally referred to as "carry".

The 2 instructions given above detect carry, not overflow.

Carry is needed for multi-word operations, and those are also painful on RISC-V, but overflow detection is required much more frequently: it is needed at every arithmetic operation, unless static program analysis can prove that overflow is impossible at that operation.

It's one more instruction only if you don't fuse those instructions in the decoder stage, but as the pattern is the one expected to be generated by compilers, implementations that care about performance are expected to fuse them.

I have no idea or practical experience with anything this low-level, so idk how much following matters, it's just someone from the crowd offering unvarnished impressions:

It's easy to believe you're replying to something that has an element of hyperbole.

It's hard to believe that "just do 2x as many instructions" and "ehhh, who cares [i.e. your typical C program doesn't check for overflow]", coupled with a seemingly self-conscious repetition of a quip from the television series Chernobyl (the one meant to reference sticking your head in the sand), retire the issue from discussion.

There was no hyperbole in what I have said.

The sequence of instructions given above is incorrect, it does not detect integer overflow (i.e. signed integer overflow). It detects carry, which is something else.

The correct sequence, which can be found in the official RISC-V documentation, requires more instructions.

Not checking for overflow in C programs is a serious mistake. All decent C compilers have compilation options for enabling overflow checking. Such options should always be used, except in functions that the programmer has analyzed carefully and concluded that integer overflow cannot happen there.

For example with operations involving counters or indices, overflow cannot normally happen, so in such places overflow checking may be disabled.

> On the other hand, detecting integer overflow in software is extremely expensive

this just isn't true. both addition and multiplication can check for overflow in <2 instructions.

Fewer than two is exactly one instruction. Which?

dammmit I meant <=2. https://godbolt.org/z/4WxeW58Pc sltu or snez for add/multiply respectively.

This result is misleading.

First, the code claims to be returning "unsigned long" from each of these functions, but the value will only ever be 0 or 1 (see [1]). The code is actually throwing away the result and just returning whether overflow occurred. If we take unsigned long *c as another argument to the function, so that we actually keep the result, we end up having to issue an extra instruction for multiplication (see [2]; I'm ignoring the sd instruction since it is simply there to dereference the *c pointer and wouldn't exist if the function got inlined).

Second, this is just unsigned overflow detection. If we do signed overflow detection, now we're up to 5 instructions for add and mul (see [3]). Considering that this is the bigger challenge, it compares quite unfavorably to architectures where this is just 2 instructions: the operation itself and a branch against a condition flag.

[1]: https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins...

[2]: https://godbolt.org/z/7rWWv57nx

[3]: https://godbolt.org/z/PnzKaz4x5

That's fair. The good news is that for signed overflow, you can claw back to the cost of unsigned overflow if you know the sign of either argument (which is fairly common).

Yeah, it's not the end of the world, and as others mentioned, a good implementation can recognize the instruction pattern and optimize for it.

It's just a bizarre design choice. I understand wanting to get rid of condition flags, but not replacing them with nothing at all.

EDIT: It seems the same choice was made by MIPS, which is a clear inspiration for RISC-V.

The argument is that there are actually 4 distinct forms of replacement:

1. 64-bit signed math is a lot less overflow-vulnerable than the 16/32-bit math that was extremely common 20 years ago

2. For the BigInt use case, the RISC-V design is pretty sensible, since you want the top bits, not just the presence of overflow

3. You can do integer operations on the FPU (using the inexact flag for detecting if rounding occurred).

4. Adding overflow detecting instructions can easily be done in an extension in the future if desired.

I think in the case of MIPS, at least, the decision logic was simply: condition flags behave like an implicit register, making the use of that register explicit would complicate the instruction encoding, and that complication would be for little benefit since most compilers ignore flags anyway, except for situations which could be replaced with direct tests on the result(s).

[deleted]

[flagged]

+1 -- misinformation is best corrected quickly. If not, AI will propagate it and many will believe the erroneous information. I guess that would be viral hallucinations.

One can quickly correct misinformation without being rude. It's not hard, and does not lessen the impact of the correction to do so. There's no reason to tolerate the kind of rudeness the parent post exhibits.