Anything where there's _at worst_ solder and traces between the compute and the memory. That's why you see it on GPUs (and Apple hardware). DRAM's advantage is modularity.
HBM is just normal DDR RAM that's been packaged with (much) wider-than-usual data buses. That's where the high bandwidth comes from, not from high clock rates or any other innovation or improvement in core specifications.
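To illustrate the width-vs-clock point with a rough back-of-the-envelope sketch (the exact per-pin rates vary by part; these are typical figures for DDR4-3200 and HBM2):

```python
# Peak bandwidth = (bus width in bits / 8) * per-pin data rate in GT/s.
def peak_bw_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps

# One DDR4-3200 channel: 64-bit bus, 3.2 GT/s per pin.
ddr4 = peak_bw_gbs(64, 3.2)    # 25.6 GB/s

# One HBM2 stack: 1024-bit bus, ~2.0 GT/s per pin -- slower per pin!
hbm2 = peak_bw_gbs(1024, 2.0)  # 256.0 GB/s

print(ddr4, hbm2)
```

Note the HBM2 stack runs each pin *slower* than the DDR4 DIMM; the 10x bandwidth gap is entirely the 1024-bit vs 64-bit bus width.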
AMD has built some consumer GPUs in the recent past with HBM - RX Vega and Radeon VII (although I assume not all "HBM" is created equal).
Isn't their APU also capable of using HBM? There was an Intel/AMD hybrid chip that used unified memory a while back too.
That was not unified memory. It was just on the same package; functionally it was like having a dedicated GPU.
My Vega 56 still has 400 GB/s of memory bandwidth, which is still insane for how old the card is.
AMD's Hawaii architecture had 320 GB/s on a 512-bit GDDR5 bus in 2013.
The Fiji XT architecture after it had 512 GB/s on a 4096-bit HBM bus in 2015.
The Vega architecture did have 400 GB/s or so in 2017, which was a bit of a downgrade.
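All three figures fall out of bus width times per-pin data rate. A quick sanity check (per-pin rates here are the commonly quoted reference-card specs, so treat them as approximate):

```python
# Peak bandwidth = (bus width in bits / 8) * per-pin data rate in GT/s.
def peak_bw_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

# Hawaii (2013): 512-bit GDDR5 at 5 GT/s
print(peak_bw_gbs(512, 5.0))   # 320.0 GB/s
# Fiji (2015): 4096-bit HBM at 1 GT/s
print(peak_bw_gbs(4096, 1.0))  # 512.0 GB/s
# Vega 56 (2017): 2048-bit HBM2 at 1.6 GT/s
print(peak_bw_gbs(2048, 1.6))  # 409.6 GB/s
```

Vega's "downgrade" is just halving the bus from Fiji's 4096 bits to 2048 while only modestly raising the per-pin rate.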
At least as I understand it.
They can just switch back to normal DRAM if HBM demand drops; you ain't getting it cheap just because AI flops.
Very few applications other than GPUs need HBM.