This is just not true; DS4 is super cheap. I'd advise starting with their release paper: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...
They introduce novel methods to improve long-context efficiency and attention: HCA & mCH. Inference needs only 27% of the FLOPs and 10% of the KV cache compared to v3.2, which makes it super efficient. Think about that: on FLOPs alone you can serve more than 3x the traffic with the same compute, and you only need a tenth of the prior KV cache.
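Back-of-envelope math from those ratios, as a sketch. The 27%/10% figures are the claimed numbers above, not measurements, and the 100 GB baseline is a made-up illustration:

```python
# Rough throughput/memory math from the claimed v4-vs-v3.2 ratios.
# Ratios come from the comment above; the baseline cache size is hypothetical.

flops_ratio = 0.27      # v4 inference FLOPs relative to v3.2
kv_cache_ratio = 0.10   # v4 KV-cache size relative to v3.2

# If serving is FLOPs-bound, throughput scales inversely with FLOPs per token.
throughput_multiplier = 1 / flops_ratio
print(f"~{throughput_multiplier:.1f}x requests on the same compute")  # ~3.7x

# KV-cache footprint for a hypothetical 100 GB v3.2 baseline:
old_cache_gb = 100.0
new_cache_gb = old_cache_gb * kv_cache_ratio
print(f"{old_cache_gb:.0f} GB -> {new_cache_gb:.0f} GB of KV cache")
```

So "more than 3x" is actually conservative if you're purely FLOPs-bound.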
Furthermore, this release is a PREVIEW. DeepSeek is the real open lab: they not only cook up quite a bit with every single release, they publish and share it. I'm running this locally.
Let me tell you how "CHEAP" this is. With v3.2 I would run out of GPU RAM and spill into system RAM at 256k context. It ran alright, and I was happy with my 7 tk/sec. With this, I'm 100% in GPU RAM at the full 1-million-token context, running more than 2x as fast while getting better results.
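For anyone wondering why context length dominates memory like that, here's the vanilla multi-head-attention KV-cache formula as a sketch. All the model dimensions below are toy placeholders, not DeepSeek's actual config, and architectures with compressed/latent attention store far less per token:

```python
# Vanilla KV-cache size: you store a K and a V vector per layer per token.
# Model dims here are hypothetical placeholders for illustration only.

def kv_cache_bytes(tokens, layers, kv_heads, head_dim, dtype_bytes=2):
    # 2x for storing both K and V at every layer; dtype_bytes=2 assumes fp16/bf16.
    return 2 * layers * kv_heads * head_dim * dtype_bytes * tokens

gb = kv_cache_bytes(tokens=1_000_000, layers=60, kv_heads=8, head_dim=128) / 2**30
print(f"~{gb:.0f} GB for a 1M-token cache with these toy dims")  # ~229 GB
```

Even with modest toy dims, a 1M-token cache under the naive scheme blows past a single GPU, which is why cutting the per-token cache is the whole ballgame for long context.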
This is super cheap. Moonshot has made it clear they're starved for GPUs, and that's why they charge what they do. If they had GPU capacity like we do in the US and subsidized the models the way we do here, they'd be giving it away for free!
> I'm running this locally.
Impressive! What is your setup? Are you running the full DeepSeek V4 Pro, or V4 Flash?
I'm running Flash. You can run it under 128 GB, so a $3000 Strix Halo would do. My rig, though, is 8 Nvidia GPUs, spilling over into system RAM.