When models get cheaper for OpenAI to run, they also get cheaper for everyone else: the capability gets commoditized. AI might be able to do more, but most people aren't going to pay for a thing they could get for free. See the many models on Hugging Face as examples of that.

And as the number of things AI is "good enough" at increases, the list of frontier tasks that people will want to pay OpenAI for shrinks. Even if OpenAI can consistently churn out PhD-level math, most companies don't care about that.

So a necessary (but not sufficient) condition for the math to work out is that frontier tasks still exist and are profitable. This is why CEOs keep hyping up AGI. But what they really want is for developers to keep paying to get AI to center a div.

> get cheaper to run

Irrelevant. The model is the moat.

> most companies don’t care about that.

Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.

> center a div

For sure a common use case, but it is not what the CEO is concerned about with AI.

> Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.

For some tasks that matters. But for a lot of tasks, "good enough but cheaper" will win out.

I'm sure there will be a market for whichever company has the best model, but just like most companies don't hire many PhDs, most companies won't feel a need for the highest-end models either, above a certain level.

E.g. with the release of Sonnet 4.6, I switched a lot of my processes from Opus to Sonnet, because Sonnet 4.6 is good enough, and it means I can do more for less.

But I'm also experimenting with Kimi, Qwen, DeepSeek, and others for a number of tasks, including fine-grained switching and interleaving: e.g. having a cheap but dumb model filter data, or take over when a sub-task is simple enough, so the smart model has less to do.
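The switching/interleaving idea can be sketched roughly like this. Everything here is hypothetical: the model functions are stand-ins rather than real API clients, and the `is_simple` heuristic is a toy placeholder for whatever classifier or confidence check you'd actually use.

```python
# Sketch of cost-aware model routing: a cheap model handles simple
# sub-tasks, and only the hard ones are escalated to the expensive model.
# The "models" below are stubs, not real API calls.

def cheap_model(task: str) -> str:
    # Stand-in for an inexpensive model (e.g. a small open-weights one).
    return f"cheap:{task}"

def smart_model(task: str) -> str:
    # Stand-in for a frontier model.
    return f"smart:{task}"

def is_simple(task: str) -> bool:
    # Toy heuristic; in practice this could be a learned classifier or a
    # confidence score returned by the cheap model itself.
    return len(task.split()) < 10

def route(task: str) -> str:
    # Try to keep work on the cheap model; escalate only when needed.
    return cheap_model(task) if is_simple(task) else smart_model(task)
```

The interesting design question is where `is_simple` comes from: a static heuristic is free, but letting the cheap model self-assess (and escalating only on low confidence) usually routes better at a small extra cost.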
