But RISC-V is a _new_ ISA. Why did we start out with the wrong design that now needs a bunch of extensions? RISC-V should have taken the learnings from x86 and ARM but instead they seem to be committing the same mistakes.
I was a bit shocked by the headline, given how poorly ARM and x86 compare to RISC-V in speed, cost, and efficiency ... in the MCU space where I near-exclusively live and where RISC-V has near-exclusively lived until quite recently. RISC-V has been great for RTOS systems, and Espressif in particular has pushed MCUs to a new level where it's become viable to run a designed-from-scratch web server (you better believe we're using vector graphics) on a $5 board that sits on your thumb. But using RISC-V in SBCs and beyond as the primary CPU is a very different ballgame.
I have a couple of C3s I was playing with. Are you talking about the P4 or the C6? Aren't their Xtensa offerings still faster?
It's not the wrong design; RISC-V is designed around extensions, and they left room in the instruction encoding for them. They don't have an 800-lb gorilla like Intel shoving the ISA down customers' throats (Canonical is the closest thing), so there is some debate about which combination of extensions is needed for desktop apps.
FWIW I wrote this article a while back all about RISC-V extensions and how they work at a low level: https://research.redhat.com/blog/article/risc-v-extensions-w... page 22 in this PDF: https://research.redhat.com/wp-content/uploads/2023/12/RHRQ_...
> They don't have a 800-lb gorilla like Intel shoving the ISA down customers' throats
Nobody really forces you to use x64 if you don't like it, just as nobody forced you to use Itanium — which Intel famously failed to "shove down the customers' throats" btw.
It is a reduced instruction set computing ISA, of course. It shouldn't really have instructions for every edge case.
I only use it for microcontrollers and it's really nice there. But yeah, I can imagine it doesn't perform well on bigger stuff. The idea of RISC was to put the intelligence in the compiler, though, not the silicon.
> It shouldn't really have instructions for every edge case.
Depends on what the instruction does. If it goes through a four-loads-four-stores chain like the VAX famously could (with pre- and post-increments), then sure, that makes it impossible to implement such an ISA in a superscalar, OOO manner (DEC tried really, really hard and couldn't do it). But anything that essentially bit-fiddles in funny ways with the two sets of 64 bits already available from the source registers, plus the immediate? Shove it in, why not? ARM has had shifted operands available for almost every instruction since ARMv1. And RISC-V also finally gets the shNadd instructions, which are essentially x86/x64's SIB byte, except available as a separate instruction. It got "andn", which is arguably more useful than a pure NOT anyway (most uses of ~ in C are in expressions of the "var &= ~expr" variety) and costs almost nothing to implement. Bit rotations, too, plus rev8 and brev8. Heck, we even got max/min instructions in RISC-V because, again, why not? The usage is incredibly widespread, the implementation is trivial, and it makes life easier both for HW implementers (no need to try to macro-fuse common instruction sequences) and for SW writers (no need to invent those instruction sequences and hope they'll get accelerated, or to read manufacturer datasheets for "officially" blessed sequences).
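To make those patterns concrete, here's a minimal C sketch (hypothetical function names) of idioms that a compiler targeting RV64 with the Zba/Zbb extensions enabled can lower to the single instructions mentioned above; the commented mnemonics are the expected lowerings, not guaranteed output for any particular compiler:

```c
#include <stdint.h>

/* var &= ~expr  ->  andn rd, rs1, rs2 (Zbb) */
uint64_t clear_bits(uint64_t var, uint64_t mask) {
    return var & ~mask;
}

/* base + (index << 3), i.e. SIB-style address math
   ->  sh3add rd, index, base (Zba) */
uint64_t *elem_addr(uint64_t *base, uint64_t index) {
    return &base[index];
}

/* signed minimum  ->  min rd, rs1, rs2 (Zbb),
   no branch and no hand-rolled bit trick needed */
int64_t smaller(int64_t a, int64_t b) {
    return a < b ? a : b;
}
```

Without these extensions each of these costs two or three instructions (e.g. a separate `not` plus `and`, or a `slli` plus `add`), which is exactly the kind of sequence implementers would otherwise be tempted to macro-fuse.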
As the evolution of x86/x64 and ARM has shown, going all-in on pure RISC doesn't pay off, because there is only so much compilers can do in an AOT deployment scenario.
> The idea of risc was to put the intelligence in the compiler though, not the silicon.
Itanium made this mistake. Sure, compilers are much better now, but dynamic scheduling still beats static scheduling for real-world tasks. You can (almost perfectly) statically schedule a matrix multiplication, but not a UI or a 3D game.
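The distinction can be sketched in C (hypothetical function names). The first loop has fixed trip counts and no data-dependent control flow, so a compiler can schedule it statically; the second depends on runtime data at every step:

```c
#include <stddef.h>

/* Statically schedulable: fixed trip counts, no data-dependent
   branches or addresses. A compiler (or a VLIW like Itanium)
   can software-pipeline this almost perfectly at compile time. */
void matmul4(double c[4][4], const double a[4][4], const double b[4][4]) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            double s = 0.0;
            for (int k = 0; k < 4; k++)
                s += a[i][k] * b[k][j];
            c[i][j] = s;
        }
}

struct node { int value; struct node *next; };

/* Hard to schedule statically: each load depends on the previous
   one (pointer chasing), the trip count is only known at run time,
   and cache behavior is unpredictable. This is where dynamic, OOO
   scheduling earns its keep. */
int list_sum(const struct node *n) {
    int s = 0;
    for (; n != NULL; n = n->next)
        s += n->value;
    return s;
}
```

UI and game code look much more like the second loop than the first: branchy, pointer-heavy, and driven by data the compiler cannot see.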
Even GPUs have some amount of dynamic scheduling now.
It was kind of an experiment from the start. Some ideas turned out to be good, so we keep them. Some turned out not to be, so we fix them with extensions.
The problem with hardware experiments is that the people who own the hardware are stuck with the experiments.
Sure, but if you bought a dev board with an experimental ISA, I think you knew what you were getting into.
If your hardware is new, you get the nicest extensions though. You just don’t use the bad parts in your code.
Sure, if you are developing software for the computer you own, instead of supporting everyone.
I mean, that is often what you do in embedded computing: you (re)sell hardware with one particular application.
It's hard to imagine a student putting together a RVA23 core in a single semester. And you don't really want that in the embedded roles RISC-V has found a lot of success in either.
Relatively new; we're about 16 years down the road.
Intentionally. Back then, the prevailing view was that everything could be solved by raw compiler power.