What are you basing that on? Haiku 4.5 just came out and beats Sonnet 4 at a third the cost.
GPT-5 and GPT-5-codex are significantly cheaper than OpenAI's full o-series models, yet outperform them.
I won't get into whether the improvements we're seeing are marginal or not, but either way, these examples clearly show that performance can improve even as resource costs fall as techniques advance.
> I won't get into whether the improvements we're seeing are marginal or not
But that's exactly the problem!
Right now, AI performs poorly enough that only a small fraction of users are willing to pay for it, and (despite tech companies constantly shoving it in everyone's face) a large portion of the user base doesn't even want to adopt it for free.
You can't spend hundreds of billions of dollars on marginal improvements in the hope that they'll eventually become good enough for widespread adoption. Nobody is going to give OpenAI a trillion dollars to grow its user base 50x over the next 15 years. They will need to show significant improvements - and soon, or the bubble will pop.