I feel that the recent iterations of LLMs haven't provided an intuitive qualitative leap. Have they entered a bottleneck period so quickly?

Azure recently discontinued the gpt-4.1 model. I had to move off of it, and moving to any gpt-5* model was worse (higher failure rates and lower accuracy) and more expensive. I had to rewrite the entire system from high-school-level prompts down to lower-elementary-school-level prompts using non-gpt models.

I would say models entered a bottleneck a long time ago. My personal opinion is that they are now overfitting newer models on coding and "agentic" capabilities at the expense of general ability in other domains.

For what it's worth, I find GPT 5.5 qualitatively different from 5.4 and 5.3.

If I had to collapse the nature of the difference into one sentence, it'd be that 5.5 does more of what I'm asking it to do, versus doing a small aspect of what I'm asking and then stopping.

5.4 required a lot of "continue" encouragement. 5.5 just "gets it" a bit more

What it boils down to for me is that even though it's more expensive, I would much rather use 5.5 on low than 5.4/5.3 on high/medium.


I am delighted to see the ceiling on small models increase exponentially. I think the "make models unsustainably large because the benchmark improved by 1%" practice is ending, and whatever is boosting small models will be the thing that makes LLMs actually useful. The main driver there is research.

They likely entered the same compute constraint scenario as Anthropic.

I.e., they had 100 compute units and demand is 200 units. They have to do some combination of buying more compute, increasing prices, lowering limits, etc.
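The arithmetic in that scenario is simple but worth making explicit: short-term capacity is fixed, so the excess demand has to be absorbed by some mix of the levers listed. A minimal sketch with the made-up numbers from above (the split across levers is purely illustrative):

```python
def excess_demand(capacity: float, demand: float) -> float:
    """Units of demand that cannot be served at current capacity."""
    return max(0.0, demand - capacity)

capacity = 100.0  # compute units available (hypothetical figure)
demand = 200.0    # compute units requested (hypothetical figure)

gap = excess_demand(capacity, demand)  # 100 units unserved

# The gap must be closed by some combination of levers
# (the split here is invented, only the sum is constrained):
bought = 40.0             # extra capacity purchased
priced_out = 35.0         # demand suppressed by higher prices
limit_cut = 25.0          # demand removed by lower rate limits

assert bought + priced_out + limit_cut == gap
```

The point is just that the three levers trade off against each other: any units not covered by new capacity have to come out of demand, via price or limits.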

You mean the company that just doubled their rate limits? https://www.anthropic.com/news/higher-limits-spacex

Bunch of nonsense.

If that were true, then they should all invest resources into projects that yield more efficient use of the compute. The most efficient producer then gains a huge cost advantage AND the capacity to serve more… so yeah, that logic doesn't hold.

Capitalism convinced you that the line goes up only if you let it eat all the resources.

Considering my use case (web apps), there already wasn't anything I couldn't do with Opus 4.5. The same will be true (or was already true) for more people with other releases, and at some point, which may have already passed, most people will stop noticing qualitative leaps.

This doesn't necessarily mean there is a bottleneck in terms of raw capability; it may also mean that your use cases (or the lower-hanging fruit among them) are already covered.

Are you running gpt-5.5 on xhigh reasoning? Because I'm seeing a clear difference between that and gpt-5.4 on xhigh.

My take is that demand is also increasing, so maybe they are making incremental improvements to model quality while focusing on improving inference costs. Prices are increasing though because even if they achieve a very efficient model, they are still selling at a loss.

> Have they entered a bottleneck period so quickly?

So quickly? This industry has had trillions thrown at it to get here "so quickly", heh.

But, yes, capability seems somewhat stagnant. It's mostly iso-performance with cost improvements, or iso-cost with performance improvements, plus agentic features.

It's a sigmoid, not a bottleneck.