AI as we know it (GPT-based LLMs) has peaked. OpenAI noticed this sometime in autumn last year, when the would-be GPT-5 was unimpressive despite its huge size. I still think GPT-4.5 was GPT-5, just rebranded to set expectations.

Google Gemini 2.5 Pro was remarkably good, and I’m not sure how they did it. It’s like an elite athlete pulling ahead despite fierce competition. They probably have excellent training methodology and data quality.

DeepSeek made huge inroads in affordability…

But even with those advances, intelligence gains are seeing diminishing returns while training costs keep climbing.

So OpenAI _needs_ to diversify, somehow. If they rely on intelligence alone, they’re toast. So they can’t.

>I’m not sure how they did it

TPUs absolutely dumpster Nvidia cards, for the same reason that Bitcoin mining is done with ASICs instead of graphics cards.

So yeah, just more training, more data, and so on.

If Google weren't so cloud-focused, they could take the AI chip market lead from NVIDIA.

I tentatively agree that LLMs have reached something of a ceiling, and in any other industry diversifying would make sense at this stage. But as others pointed out, OAI and others have attached their valuation directly to their definition of achieving “AGI”. Any pivot away from that, even if AGI were realistic in the coming years (my opinion: it isn’t), would be foolhardy and go against investors. So, in turn, diversifying would be a clear admission that even sama doesn’t see AGI as possible in the near term.