When people talk about the “AI bubble popping” this is what they mean. It is clear that AI will remain useful, but the “singularity is nigh” hype is faltering, and the company valuations based on perpetual exponential improvement are just not realistic. Worse, each generation's marginal improvements come with ever higher resource requirements, which puts a soft cap on how good an AI can be and still be economical to run.

What are you basing that on? Haiku 4.5 just came out and beats Sonnet 4 at a third the cost.

GPT-5 and GPT-5-codex are significantly cheaper than the o-series full models from OpenAI, but outperform them.

I won't get into whether the improvements we're seeing are marginal or not, but whether or not that's the case, these examples clearly show you can get improved performance with decreasing resource cost as techniques advance.

> I won't get into whether the improvements we're seeing are marginal or not

But that's exactly the problem!

Right now, AI performs poorly enough that only a small fraction of users are willing to pay money for it, and (despite tech companies constantly shoving it in everyone's face) a large portion of the user base doesn't even want to adopt it for free.

You can't spend hundreds of billions of dollars on marginal improvements in the hope that they'll eventually become good enough for widespread adoption. Nobody is going to give OpenAI a trillion dollars to grow their user base 50x over the next 15 years. They are going to need to show significant improvements - and soon, or the bubble will pop.

>When people talk about the “AI bubble popping” this is what they mean.

You mean what they have conceded so far to be what they mean. With every new model, they start to see that they have to give up a little more.