> slow not due to mem bandwidth, but due to PCIe bandwidth, which is the bottleneck.
> On server/workstation motherboards ... the memory throughput [to system RAM] achievable by the GPU becomes a very small fraction of the system memory bandwidth.
Yes, this is a critical point. It means this is only realistically useful for prefill, which is compute-bound rather than memory-bandwidth-bound.
Sorry, I'm a bit of a noob on LLMs. What is "prefill"? As opposed to what?
Prefill - the model computes the KV cache over the input toks, up to the last token in your input (the 'prompt'), at which point it can then begin -
Decode - the model chooses a new token to append to the end of the current token list (i.e. it generates a token), then computes the new token's KVs.
Decode is basically prefill 1 tok -> add 1 tok -> prefill 1 more tok -> ....
but in the initial prefill stage it doesn't need to do generation, since you've provided the toks.
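To make the two stages concrete, here's a toy sketch in Python. None of this is a real LLM - `toy_kv` and the "sampling" rule are made-up placeholders - it just shows the shape of the loop: prefill fills the KV cache in one pass over the prompt, then decode appends one token (and its KVs) at a time.

```python
def toy_kv(token):
    # Stand-in for computing a token's key/value entries (made up).
    return (token * 2, token * 3)

def prefill(prompt_tokens):
    # Prefill: compute KV entries for every prompt token in one pass.
    # Compute-bound: all prompt tokens are processed together.
    return [toy_kv(t) for t in prompt_tokens]

def decode_step(kv_cache):
    # Decode: pick the next token (placeholder rule instead of real
    # sampling), then "prefill one more token" by appending its KVs.
    next_token = len(kv_cache)
    kv_cache.append(toy_kv(next_token))
    return next_token

prompt = [5, 7, 9]
cache = prefill(prompt)                             # one pass over the prompt
generated = [decode_step(cache) for _ in range(3)]  # one token at a time
```

The point of the sketch is the asymmetry: `prefill` touches many tokens per pass (so arithmetic dominates), while each `decode_step` touches one token but has to read the whole cache/weights, which is why decode is memory-bandwidth-bound.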
Incidentally, prefill is also how caching, say, a system prompt saves you some $ on API usage with LLM providers: they only compute the KV cache for the new tokens after the system prompt.
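Extending the same toy picture (again, `toy_kv` is a made-up stand-in, not a provider's actual implementation): prompt caching just means the provider keeps the KV cache for the shared prefix around and only runs prefill over the tokens that come after it.

```python
def toy_kv(token):
    # Stand-in for computing a token's key/value entries (made up).
    return (token * 2, token * 3)

def prefill(tokens, cached=None):
    # Start from a cached prefix if one exists; only the new tokens
    # actually cost KV computation.
    cache = list(cached) if cached else []
    cache.extend(toy_kv(t) for t in tokens)
    return cache

system_prompt = [1, 2, 3, 4]
system_cache = prefill(system_prompt)     # computed once, reused across requests

user_turn = [9, 8]
full_cache = prefill(user_turn, cached=system_cache)  # only 2 new KV computations
```

Each new request with the same system prompt pays for `len(user_turn)` tokens of prefill instead of `len(system_prompt) + len(user_turn)`, which is where the API discount comes from.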