But local models will never compete quality-wise with frontier models in a data center.
Yeah, Kimi K2 apparently requires 2TB of VRAM to run[1], and it trails the proprietary models in terms of intelligence. There's no world where people are going to replace ChatGPT or Claude Code with a local model.
[1] https://www.canirun.ai/model/kimi-k2
Never say never. Besides, they just need to be "good enough".