If it's intelligence + speed you want, nothing comes close to GPT-OSS-120B on Cerebras or Groq.

However, this looks like it has great potential for cost-effectiveness. As of today it's free to use over API on OpenRouter, so it's a bit unclear what it'll cost once that changes, but free is free!

https://openrouter.ai/nvidia/nemotron-3-nano-30b-a3b:free

> nothing comes close to GPT-OSS-120B on Cerebras

That's temporary. Cerebras speeds up everything, so if Nemotron is good quality, it's just a matter of time until they add it.

That's unlikely. Whether Cerebras *can* speed up everything, I don't know, I'm not an insider. But it evidently doesn't speed up everything today: their models page [1] lists only 4 production models and 2 preview models.

[1] https://inference-docs.cerebras.ai/models/overview

They need to compile the model for their chips. Standard transformer architectures are easier, so models like GPT-OSS, Qwen, and GLM will get deployed if there's demand.

Nemotron, on the other hand, is a hybrid (Transformer + Mamba-2), so it will be more challenging to compile for Cerebras/Groq chips.
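To make the compilation point concrete, here's a toy sketch (all op names hypothetical, not any vendor's actual toolchain) of why a hybrid stack is harder to lower: a compiler whose supported kernel set covers only standard transformer ops handles a pure transformer fine, but has no lowering for the selective-scan op that Mamba-2 layers need.

```python
# Toy illustration only; op names are made up, not Cerebras/Groq internals.
# A compiler with a fixed transformer kernel set can lower a standard
# transformer stack, but fails on the SSM scan op used by Mamba-2 layers.

SUPPORTED_OPS = {"attention", "mlp", "layernorm"}  # typical transformer kernels

def unsupported_ops(layer_stack):
    """Return the ops in a model's layer stack that the compiler can't lower."""
    return sorted({op for op in layer_stack if op not in SUPPORTED_OPS})

# Standard transformer: every block is attention + MLP.
transformer = ["layernorm", "attention", "layernorm", "mlp"] * 2

# Hybrid (Nemotron-style): Mamba-2 selective-scan layers interleaved.
hybrid = ["layernorm", "ssm_scan", "layernorm", "attention", "layernorm", "mlp"]

print(unsupported_ops(transformer))  # []
print(unsupported_ops(hybrid))       # ['ssm_scan']
```

Everything already compiled (GPT-OSS, Qwen, GLM) falls in the first bucket; a hybrid forces the vendor to write and tune a whole new kernel before it can ship at all.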

(Methinks Nvidia is purposefully picking an architecture + FP4 quantization that is easy to ship on Nvidia chips, but harder for TPUs or Cerebras/Groq to deploy.)