Qwen scores above Sonnet in coding benchmarks. It runs locally. In personal use it's really good. Anecdotally, others have used it to vibe-code or do agentic coding successfully. Not toy problems. Not a toy model.
Qwen3.6 raises the bar for models of its size. There really isn't a comparison in my opinion.
Having tried it.
Qwen is really good.
Also, it makes sense in general: 8B models are usually not very good.^
That this 8B model is decent is impressive, but that it could perform on par with a good model four times its size is a daydream.
^ - To be polite. Small models plus tool use for coding agents are almost universally ass. Proof: my personal experience. I've tried many of them.
So it’s just like, your opinion, man?
edit: It was a play on The Big Lebowski, folks.
College SAT scores do not tell you how the dev applying for your open back end systems engineering job is going to do once they're in your workplace harness.
Nor do class standings, nor hackerrank and the like.
What will tell you is asking them to fix a thing in your codebase. Once you ask an LLM to do that a dozen times, I'd argue it's no longer "just your opinion, man"; it's a context-engineered performance x applicability assessment.
And it is very predictive.
But it's also why someone doing well at job A isn't necessarily going to be great at job B, and someone bad at A won't necessarily be bad at B.
I've often felt we should normalize a sort of mutual try-before-you-buy period, where the job seeker and the company can spend a series of days working together without harming the seeker's existing employment, to de-risk the mutual learning. ESPECIALLY to de-risk the career change for the applicant, who only gets one timeline to manage, as opposed to the company, which considers the applicant fungible.
But back to the LLM: yeah, the only valid opinion on whether it works for you is not a benchmark, it's an informed opinion from 'using it in anger'.
> So it’s just like, your opinion, man?
Yes.
That is how you empirically evaluate tools; not by reading stupid benchmarks, but by actually using them, for hours and hours. Doing real work.
Did you try using it? For hours? Do you use Qwen?
How about you tell us about your experience with the great 8B models you use daily. What coding agent harness do you have them hooked up to? What context size can you get before they lose track of what's happening? Do you swap between models for different coding tasks?
Or have you not, actually, even tried any of this stuff yourself?
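For context, "hooking a local model up to a harness" usually just means pointing an agent at a local OpenAI-compatible endpoint (llama.cpp's llama-server, Ollama, etc.). A minimal sketch; the port, base URL, and model name here are assumptions, adjust them to whatever your local stack exposes:

```python
# Minimal sketch: build the chat-completions request a coding harness
# would send to a locally served model. Endpoint, port, and model name
# are assumptions (e.g. llama.cpp's `llama-server` on localhost:8080).
import json
import urllib.request

def local_chat_request(prompt, model="qwen3-8b",
                       base_url="http://localhost:8080/v1"):
    """Build (but do not send) an OpenAI-style chat request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Small models lose the thread on long contexts; cap the reply
        # and keep the prompt tight instead of dumping the whole repo in.
        "max_tokens": 512,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# The caller would pass this to urllib.request.urlopen() against the
# live local server; no network traffic happens in this sketch.
req = local_chat_request("Fix the failing test in utils.py")
```

The point is that the harness side is trivial; the hard part, and the thing benchmarks don't measure, is how the model behaves once it's in that loop on your code.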
the (dead) internet is full of opinions exactly like this
you tried qwen3.6 and you think it is not good?
I do not have high opinions of any ai model.