Qwen3.6 9B is as good as GPT-4o and runs on my M2 MacBook Air. Models are getting stronger and less costly at the same time, but these are somewhat separate branches of research. Frontier labs are spending more because they are still getting marginal returns and there is more capacity to spend than there was a year ago.
Qwen 3.6 9B doesn't exist.
If you meant Qwen 3.5 9B and you truly believe it's as good as GPT-4o, then I can only assume you have a very basic use case.
Cost for a specific level of performance decreases about 10x per year; this has been a pretty consistent property for a while now.
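The claimed trend is simple exponential decay. A minimal sketch (the 10x/year figure is the commenter's claim, and the starting price is a made-up illustration, not real pricing data):

```python
# Sketch of the claimed trend: cost for a *fixed* capability level
# dropping ~10x per year. The $10/M-token starting point is hypothetical.
def cost_after(years, start_cost=10.0, decay_per_year=10.0):
    """Cost per million tokens after `years`, assuming a 10x/year decline."""
    return start_cost / (decay_per_year ** years)

print(cost_after(0))  # 10.0
print(cost_after(2))  # 0.1  -- two orders of magnitude cheaper in two years
```

Note this is about cost at a fixed performance level, which is a different axis from frontier training spend going up.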
You are mixing up cost and progress. The fact that it's getting more and more expensive doesn't, by itself, mean progress is slowing down.
They are intrinsically linked beyond a certain point. If we're making progress but costs are spiraling exponentially then it stands to reason that we will soon reach a point where we can no longer afford the increasing costs and thus progress will slow.
(barring some breakthrough that reduces costs, which of course may happen, but of which recent model improvements are not strong evidence)
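The affordability argument can be made concrete as a toy model: if training cost grows exponentially faster than the budget available to spend, there is a crossover point after which further scaling can't be funded. All growth rates and starting values here are hypothetical, purely to illustrate the shape of the argument:

```python
# Toy model of the affordability argument: exponential cost growth
# outrunning slower budget growth. Numbers are illustrative, not real.
def years_until_unaffordable(cost, budget, cost_growth=3.0, budget_growth=1.5):
    """Return how many years until annual training cost exceeds the budget,
    assuming both compound at fixed (hypothetical) yearly rates."""
    years = 0
    while cost <= budget:
        cost *= cost_growth
        budget *= budget_growth
        years += 1
    return years

print(years_until_unaffordable(cost=0.1, budget=10.0))  # → 7
```

The point is structural: with any cost growth rate above the budget growth rate, the crossover is inevitable; only the breakthrough caveat above changes that.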
Investment dollars.
Source for that claim?
Nobody is releasing NEW models
…not only is this not true but it also doesn’t matter. Why would this indicate performance saturating?
What constitutes a NEW model for the purposes of calculating progress?
The standard networking connection has been called “Ethernet” for more than thirty years, so networking has stagnated, right?
If higher-bandwidth networking consisted primarily of running more and more Ethernet lines in parallel, you would most certainly agree that "networking has stagnated".
"Reasoning" and now "Agentic" AI systems are not some fundamental improvement on LLMs, they're just running roughly the same prior-gen LLMS, multiple times.
Hence the conclusion that LLM improvement has slowed down, if not stagnated entirely, and that we should not expect the improvements of switching to these "reasoning" systems to keep happening.
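The "same model run multiple times" pattern the comment describes is essentially repeated sampling plus selection (best-of-n). A minimal sketch, where the model call and the quality score are stand-ins rather than any real API:

```python
import random

def fake_llm(prompt, seed):
    """Stand-in for one call to a fixed underlying model.
    Returns a candidate answer and a hypothetical quality score."""
    rng = random.Random((prompt, seed).__hash__() ^ seed)
    return f"answer-{seed}", rng.random()

def best_of_n(prompt, n=8):
    """The pattern at issue: n calls to the *same* model,
    keeping the best-scored candidate. No new base model involved."""
    candidates = [fake_llm(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda c: c[1])[0]

result = best_of_n("prove the lemma")
print(result)
```

Whether that counts as "fundamental" improvement is exactly the disagreement in this thread: it buys real quality gains, but the gains come from spending more inference compute, not from a stronger base model.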
From TFA:
“ChatGPT came up with an idea which is original and clever. It is the sort of idea I would be very proud to come up with after a week or two of pondering, and it took ChatGPT less than an hour to find and prove”
You misunderstand. I'm not saying that Reasoning/Agentic systems aren't better.
I'm saying they're not an advancement in the tech in the way GPT 1 through 3 were. They're a different kind of improvement.
And as such the rate of improvement cannot simply be extrapolated into the future.
The GPT-1 through GPT-3 advancements were exactly like using more Ethernet cables in parallel.
All the interesting conceptual breakthroughs came after GPT-3, RL and reasoning being the main ones.
What? DeepSeekV3 just came out and is incredible for the price. Mythos is also half-released.
Until you or I can actually use Mythos in Claude without an NDA or other strings attached, Mythos is not released; it's just an effective marketing tool for Anthropic.