The Radeon VII came out in 2019 as a $700 consumer GPU with a 1 TB/s HBM2 memory subsystem, which is more than any consumer GPU you can get today, including the high-end ones afaik. At that point in time there was a whole lineup of AMD GPUs with HBM, going down into the midrange.

If they could make this stuff and sell it to regular people a decade ago for very palatable prices, why do they come up with the idea that this is the technology of the gods, unaffordable by mere mortals?

> why do they come up with the idea that this is the technology of the gods, unaffordable by mere mortals?

because the gods want it all and are willing to pay top dollar.

Isn't this the case of money going from left pocket to the right, since these companies are owned by the same investment funds?

I wonder whether this is some kind of a racket.

"Owned"? You mean they invested

Investors are owners, yes.

> Isn't this the case of money going from left pocket to the right, since these companies are owned by the same investment funds?

No.

I have been wondering about this recently. The convention used to be that if you wanted to keep costs down, you kept the memory bus as narrow as possible. I still remember the awful Radeon 9200 SE: a 64-bit data bus that strangled an already slow GPU.

Heck, I have a phone with a 16-bit memory bus, for instance. The high(ish) clock rate only slightly makes up the difference.

But with general prices on all components going up, it might not be such a big factor any more.

HBM might make sense for higher-end products, which frees up capacity for the lower end that will never use the tech.

Eh I feel like the memory bus width thing was more a case of binning memory controllers and the like.

Designing a part with a wide bus and putting the traces down on the board is what I would expect to be the easy part these days (surely).

But yield, yield comes for us all.


Supply and demand. The prices are high because of high demand.

And the bottleneck at the time was HBM interposers, not the actual RAM dies.

I was gonna say, I still use an AMD Vega that uses HBM2.

Vega was a card with decent perf/$ for the consumer, but from a purely technical point of view (perf/mm², perf/BW, perf/W) it was a major failure. Both Vega (and Fiji before it) showed that excess memory BW alone is not sufficient to win.

> Both Vega (and Fiji before it) showed that excess memory BW alone is not sufficient to win.

That's correct if you're targeting gamers, but local AI inference changes this picture substantially.

My main system still uses a Vega 64 and it plays all the games I'd care about. Undervolts like a champ! Will use it until it dies..

> 1TB/s HBM2 memory subsystem which is more than any consumer GPU you can get today

5090 has 1.8 TB/s?

The 5090 is an overpriced outlier. A typical consumer GPU, like the RTX 5070, has roughly a third of its memory throughput.

Even an RTX 5080 has lower memory throughput than a Radeon VII from 2019, 7 years ago, while being much more expensive.

The memory throughput of GPUs per dollar has regressed greatly during the last 5 years: GPU memory interfaces have been narrowed in order to decrease production costs, yet prices have gone up anyway.

The RTX 5080 has a 256-bit memory interface, while the much cheaper Radeon VII had a 4096-bit one. The RTX 5080's GDDR7 runs far faster per pin than HBM2, but that speed was used to shrink the bus and reduce production costs, not to increase memory throughput, while the product price went up at the same time.
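The trade-off is easy to see from the usual peak-bandwidth formula. A quick sketch, using the commonly published specs (4096-bit HBM2 at about 2 Gbit/s per pin for the Radeon VII, 256-bit GDDR7 at about 30 Gbit/s per pin for the RTX 5080; treat the per-pin rates as approximate):

```python
def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak memory bandwidth in GB/s: pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbit_s / 8

radeon_vii = bandwidth_gb_s(4096, 2.0)   # HBM2: very wide bus, slow pins
rtx_5080 = bandwidth_gb_s(256, 30.0)     # GDDR7: narrow bus, fast pins

print(radeon_vii)  # 1024.0 GB/s
print(rtx_5080)    # 960.0 GB/s
```

A 15x faster pin on a 16x narrower bus lands you slightly below where you started, which is exactly the regression being described.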

> Even a RTX 5080 has a lower memory throughput than a Radeon VII from 2019, 7 years ago, while being much more expensive.

And it's faster for gaming, I guess? Which is what matters for the typical user.

Anyway you can buy much faster GPUs now than in 2019. They are also much more expensive, yes.

Modern GPUs like RTX 5080 are much faster for the applications that are limited by computational capabilities, mainly because they have more execution units, whose clock frequencies have also increased.

I suppose that most games are limited by computation, so they are indeed much faster on modern GPUs.

However, there are applications that are limited by memory throughput, not by computation, including AI inference and many scientific/technical computing applications.

For such applications, old GPUs with higher memory throughput are still faster.
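The compute-bound vs memory-bound distinction is the standard roofline argument: a kernel is bandwidth-limited when its arithmetic intensity (FLOPs per byte moved) sits below the machine's ridge point, peak compute divided by peak bandwidth. A toy sketch with illustrative numbers (not any specific GPU's specs):

```python
def is_memory_bound(flops_per_byte: float, peak_tflops: float, peak_tb_s: float) -> bool:
    """Roofline test: below the ridge point, bandwidth (not compute) sets the speed."""
    ridge = peak_tflops / peak_tb_s  # FLOP/byte where compute and memory balance
    return flops_per_byte < ridge

# Batch-1 LLM inference streams every weight roughly once per token: ~1 FLOP/byte
print(is_memory_bound(1.0, peak_tflops=50.0, peak_tb_s=1.0))    # True: bandwidth-limited
# A large matrix-matrix multiply reuses data heavily: hundreds of FLOP/byte
print(is_memory_bound(300.0, peak_tflops=50.0, peak_tb_s=1.0))  # False: compute-limited
```

For the low-intensity case, an old card with higher TB/s wins regardless of how many more TFLOPS the new one has.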

This is why I am still using an old Radeon VII and a couple of other ancient AMD GPUs with high memory throughput.

Last year I bought an Intel GPU, which is still slower than my old GPUs, but at least it has very good performance per dollar, competitive with that of the old GPUs, because it was very cheap; the current AMD and especially NVIDIA GPUs have poor performance per dollar.

then it must be the case you can't get one (for a fair price?)

Define "fair price"

5090s are certainly expensive compared to most other GPUs, but not expensive enough to be unobtainium for nearly any professional who could utilize one as part of their job.

Hell, some of us utilize them just to play video games!

That card only had 16GB of memory; its memory bandwidth was 1TB/s.

The Pro variant had 32GB; I had one in a 2019 Mac Pro.

You're saying this in a world where AMD's highest end consumer GPU in 2026 is also limited to 16 GB.

7900XT has 20GB and you can still get some unused ones.

R9700 has 32GB and is cheaper than most NVidia consumer GPUs, even though it's a "pro".

And I can still buy a new W7800 48GB for a relatively decent price.

RX7900 XTX has 24GB

This card is 4 years old; it's not on store shelves anymore.

FWIW that depends on the stores you're looking at. There are three models from different manufacturers available here in a few shops. The prices are a bit ouchier than what i paid for mine around Christmas 2024 though (i got mine on a sale).

You can still get "new" ones on amazon in europe.

It also does 64-bit floating point, I think?

After NVIDIA essentially removed FP64 from consumer GPUs (their 1:64 FP64:FP32 performance ratio is worse than what you can obtain by software emulation, so it is useless except for testing programs intended to run on datacenter GPUs), AMD persisted for a few years, but then they followed NVIDIA too.

AMD Hawaii GPUs still had 1:2 FP64:FP32, while the consumer variant of Radeon VII dropped to 1:4. The following AMD consumer GPUs dropped the FP64 performance to levels that are not competitive with CPUs.

Nowadays the only consumer GPUs with decent FP64 performance are the Intel Battlemage GPUs, which have a 1:8 performance ratio, which provides very good performance per dollar.
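Those 1:N ratios translate directly into peak FP64 throughput. A quick sketch; the 32 TFLOPS FP32 figure is purely illustrative, not any particular card's spec:

```python
def fp64_tflops(fp32_tflops: float, divisor: int) -> float:
    """FP64 peak implied by a 1:divisor FP64:FP32 performance ratio."""
    return fp32_tflops / divisor

print(fp64_tflops(32.0, 64))  # 0.5  -> 1:64, NVIDIA-style consumer ratio
print(fp64_tflops(32.0, 8))   # 4.0  -> 1:8, Battlemage-style ratio
print(fp64_tflops(32.0, 2))   # 16.0 -> 1:2, old Hawaii-style ratio
```

At 1:64 the FP64 peak drops into territory a modern many-core CPU can match, which is why the ratio matters more than the headline FP32 number for these workloads.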

Class war by the Epstein People.

What do you mean exactly by “Epstein People”? I thoroughly hope it’s not the 4chan definition of it

A RAM discussion descends into cranks referring to "Epstein People" and you're the one downvoted - website's absolutely cooked.

Only indirectly. They have most of the money, so if they want something that’s in short supply, the price will rise to the point that it becomes unaffordable to everyone else.

Reason number 7,322 why US-style ultracapitalism is self-destructive, anti-social, and dystopian.