I looked up the graph they are using
https://openrouter.ai/rankings
It says "Grok Code Fast 1" is ranked first in token usage? That's surprising. Maybe it's just OpenRouter bias or how the LLM is used.
I would have assumed Claude would be #1
xAI has been offering it for free. Look at the #1 user: it's Kilo Code, which has been giving Grok Code away for free for weeks.
Cline too. It was free for a week(end?) and they extended it for a longer time.
It's pretty good tbh. Some quirks but it's efficient at making changes and seems to understand code pretty well.
Everyone I know who uses Claude does not use it through openrouter.
Grok code fast is a legit good model. https://www.youtube.com/watch?v=Y-SyfYXupTQ
Grok 4 was really good in my experience, but it was really slow. Might try the fast version if my Claude runs out of tokens. I stick to Claude because I know the model's output patterns and flaws (which are more predictable than GPT-5's).
Just yesterday I had Cline+Grok Code Fast fix an issue caused by Claude ... who ran out of credits mid-fix.
(LLMs seem to think Go embeds can use ../../-style relative paths; they cannot. And when they notice it doesn't work like that, they use the weirdest shit to try to fix it.)
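For context, here's a minimal sketch of the restriction; the package and directory names (assets, templates/) are just made up for illustration. //go:embed patterns may not contain "." or ".." path elements, so you can't climb out of the package directory, you have to embed from the package that actually owns the files.

```go
// assets.go — hypothetical package that owns the embedded files.
// Assumes a templates/ directory with .html files sits next to this source file.
package assets

import "embed"

// This does NOT compile: embed patterns may not contain "." or ".."
// path elements, so you can't reach outside the package directory:
//
//	//go:embed ../../templates/index.html
//	var broken embed.FS

// The working pattern embeds files at or below this file's directory.
//
//go:embed templates/*.html
var Templates embed.FS
```

The usual fix is to drop a tiny package like this next to the files and import it, instead of trying to reach them with relative paths from somewhere else.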