If higher-bandwidth networking consisted primarily of running more and more Ethernet lines in parallel, you would most certainly agree that "networking has stagnated".
"Reasoning" and now "agentic" AI systems are not some fundamental improvement on LLMs; they're just running roughly the same prior-gen LLMs multiple times.
Hence the conclusion that LLM improvement has slowed down, if not stagnated entirely, and that we should not expect the gains from switching to these "reasoning" systems to keep recurring.
From TFA:
“ChatGPT came up with an idea which is original and clever. It is the sort of idea I would be very proud to come up with after a week or two of pondering, and it took ChatGPT less than an hour to find and prove”
You misunderstand. I'm not saying that Reasoning/Agentic systems aren't better.
I'm saying they're not an advancement in the tech in the way GPT 1 through 3 were. They're a different kind of improvement.
And as such the rate of improvement cannot just be extrapolated into the future.
The advancements from GPT-1 through GPT-3 were exactly like using more Ethernet cables in parallel.
All the interesting conceptual breakthroughs came after GPT-3, RL and reasoning being the main ones.