On my (single) AMD 3950X, running entirely on the CPU (llama -t32 -dev none), I was getting 14 tokens/s running Qwen3-Coder-30B-A3B-Instruct-IQ4_NL.gguf last night. That's the best I've had out of a model that doesn't feel stupid.
For reference, I get 29 tokens/s with the same model using 12 threads on an AMD 9950X3D. I'd guess it's roughly 2x faster because AVX-512 is about 2x faster on Zen 5. Somewhat unexpectedly, increasing the thread count decreases performance: 16 threads already perform slightly worse, and with 32 threads I only get 26.5 tokens/s.
On a 5090 the same model produces ~170 tokens/s.
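If anyone wants to check the thread-scaling effect on their own box, llama.cpp's llama-bench can sweep thread counts in one run; something like the sketch below (the thread list and -ngl 0 to keep it on the CPU are just example settings):

    # hypothetical sweep over thread counts, CPU-only (-ngl 0)
    ./build/bin/llama-bench -m /discs/fast/ai/Qwen3-Coder-30B-A3B-Instruct-IQ4_NL.gguf -ngl 0 -t 8,12,16,32

Each output row reports prompt-processing and generation tokens/s for one thread count, which makes it easy to spot where adding threads stops helping.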
How much RAM is it using, by the way? I see 30B, but without knowing the precision it's unclear how much memory one needs.
Q4 is usually around 4.5 bits per parameter, but can be more since some layers are quantised to a higher precision. That would suggest 30 billion × 4.5 bits ≈ 16.9 GB (about 15.7 GiB), though the quant the GP is using is 17.3 GB, and the one in the article is 19.7 GB. Add around 20-50% overhead for various things, plus a bit more per 1k tokens of context, and you're probably looking at no more than 32 GB. If you're using something like llama.cpp, which can offload part of the model to the GPU, you'll still get decent performance even on a 16 GB VRAM GPU.
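As a quick sanity check on that arithmetic (weights only, assuming ~4.5 bits per parameter on average; KV cache and runtime overhead come on top):

    # back-of-the-envelope: 30B params at ~4.5 bits each
    echo "30*10^9 * 4.5 / 8 / 10^9" | bc -l    # ~16.9 GB decimal, ~15.7 GiB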
Sounds close! top says my llama is using 17.7G virt, 16.6G resident with:

    ./build/bin/llama-cli -m /discs/fast/ai/Qwen3-Coder-30B-A3B-Instruct-IQ4_NL.gguf --jinja -ngl 99 --temp 0.7 --min-p 0.0 --top-p 0.80 --top-k 20 --presence-penalty 1.0 -t 32 -dev none
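For the partial-offload case mentioned above, a hedged variant of that command: drop -dev none so a GPU is used and give -ngl a small layer count (20 here is just a guess; raise or lower it until the model fits in 16 GB of VRAM):

    # hypothetical partial offload to GPU; tune -ngl to fit your VRAM
    ./build/bin/llama-cli -m /discs/fast/ai/Qwen3-Coder-30B-A3B-Instruct-IQ4_NL.gguf --jinja -ngl 20 -t 32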