The way to do it _today_ requires enormous amounts of HBM! However, we've never really designed dedicated inference accelerators, which is actually a fairly "trivial" problem; we've just never had the need.
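To put a number on "enormous", here's a minimal sketch, assuming a 70B-parameter model served in fp16 with a modest batch and context (the model shape, batch size, and context length are made-up-but-plausible figures, not anything from a specific vendor):

```python
# Rough back-of-envelope for why serving an LLM today eats memory:
# resident weights plus the per-request KV cache.
# Every figure here is an illustrative assumption.

def inference_memory_gb(params_b=70, bytes_per_param=2,
                        layers=80, kv_heads=8, head_dim=128,
                        context_len=32_768, batch=8, kv_bytes=2):
    weights = params_b * 1e9 * bytes_per_param
    # KV cache: 2 tensors (K and V) per layer, per KV head, per token, per sequence
    kv_cache = 2 * layers * kv_heads * head_dim * context_len * batch * kv_bytes
    return weights / 1e9, kv_cache / 1e9

weights_gb, kv_gb = inference_memory_gb()
print(f"weights ~{weights_gb:.0f} GB, KV cache ~{kv_gb:.0f} GB")
# -> weights ~140 GB, KV cache ~86 GB: already more than two 80 GB cards of HBM.
```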

Groq (acqui-hired by NVidia) came up with a different processor architecture: metric shit-tons of SRAM attached to a modest single-core deterministic processor. No HBM needed on this card, and roughly 32x faster inference than today's best GPUs!
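The intuition for why a pile of SRAM wins at inference, as a hedged back-of-envelope (the bandwidth figures are ballpark assumptions, not benchmarks): generating a token is bandwidth-bound, because the weights have to stream past the compute for every single token, so per-stream tokens/sec tops out around bandwidth divided by model size.

```python
# Decode-time upper bound: tokens/sec per stream ~= memory bandwidth / model bytes.
# Bandwidth numbers below are rough ballparks for illustration only.

model_bytes = 70e9 * 2              # assumed 70B-parameter model at 2 bytes/param
bandwidths = {
    "HBM (one big GPU)": 3.35e12,   # bytes/s, roughly current-gen HBM
    "on-die SRAM":       80e12,     # bytes/s, roughly what LPU-style chips claim
}

for name, bw in bandwidths.items():
    print(f"{name:18s} ~{bw / model_bytes:5.0f} tokens/s per stream (upper bound)")
```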

These LPUs are pretty useless for training though, and training is what companies building models actually care about! Training is expensive; inference is cheap (someday, not now).

There's also a Canadian company that _literally burned the model into a silicon mask_ on a chip. It's unbelievably fast (1000x), but not flexible, of course: https://chatjimmy.ai

The point is that a metric shit-ton of SRAM is still a large amount of expensive memory.
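How expensive? A rough sketch, assuming a high-density bit cell around 0.03 µm² (a made-up modern-node ballpark, before counting sense amps, decoders, and routing):

```python
# SRAM cost is die area: ~6 transistors per bit, all on the logic die itself.
# The cell-size and reticle figures are assumptions for illustration.

bit_cell_um2 = 0.03                   # assumed high-density SRAM cell, um^2 per bit
reticle_mm2  = 850                    # rough maximum manufacturable die size

def sram_area_mm2(megabytes):
    bits = megabytes * 8 * 1e6
    return bits * bit_cell_um2 / 1e6  # um^2 -> mm^2

for mb in (230, 1024, 8192):          # roughly one LPU's worth, 1 GB, 8 GB
    area = sram_area_mm2(mb)
    print(f"{mb:5d} MB -> ~{area:5.0f} mm^2 of cells "
          f"({area / reticle_mm2:.2f}x a reticle-limit die)")
```

Even before overhead, a few GB of SRAM blows past the largest die you can manufacture, which is why SRAM-only chips hold at most a few hundred MB each and a whole model ends up spread across many of them.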

SRAM and HBM are two completely different things though... SRAM is what your L1/L2/L3 caches are made of (most of the time, asterisks exist). This is something we've been doing for years; it's a proven technology that's cheap in the sense that it needs no exotic process or packaging, since it's fabbed right alongside the logic. It's all part of the processor.
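If you're on Linux you can poke at that SRAM right now; this little sketch just reads the standard sysfs cache entries (Linux-only paths, nothing vendor-specific):

```python
# Print the cache hierarchy of CPU 0, i.e. the SRAM already baked into the chip.
from pathlib import Path

for idx in sorted(Path("/sys/devices/system/cpu/cpu0/cache").glob("index*")):
    level = (idx / "level").read_text().strip()
    ctype = (idx / "type").read_text().strip()   # Data / Instruction / Unified
    size  = (idx / "size").read_text().strip()   # e.g. "32K", "1024K", "32768K"
    print(f"L{level} {ctype:12s} {size}")
```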

HBM, by contrast, is its own set of chips: stacks of DRAM dies packaged next to the processor and wired to it through an interposer.