It’s generally useful to consider unit economics separately from the whole company. If your unit economics are negative, things are very bleak. If they’re positive, your chances go up a lot: scaling the business amortizes fixed (non-unit) costs, such as admin and R&D, and slightly improves unit margins as well.
However, this does not work as well if your fixed (non-unit) costs are growing exponentially. You can’t get out of that unless your user base grows exponentially too, or the customer value (and price) per user grows exponentially.
I think this is what Altman is saying: this is an unusual situation where unit economics are positive, but fixed costs are exploding faster than economies of scale can absorb them.
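To make that shape concrete, here’s a toy sketch (all numbers are invented for illustration, not any company’s real figures): with a positive per-user margin but fixed costs growing faster than the user base, losses keep widening no matter how far you scale.

```python
# Toy model: profit = users * (price - unit_cost) - fixed_cost
# Illustrative numbers only. Unit margin is positive ($5/user),
# but fixed costs (e.g. training/R&D) grow faster than users do.

def profit(users: float, price: float, unit_cost: float, fixed_cost: float) -> float:
    return users * (price - unit_cost) - fixed_cost

users, price, unit_cost = 1_000_000, 20.0, 15.0  # positive unit economics
fixed_cost = 50_000_000.0

for year in range(5):
    print(f"year {year}: profit = {profit(users, price, unit_cost, fixed_cost):,.0f}")
    users *= 2        # user base doubles each year...
    fixed_cost *= 3   # ...but fixed costs triple, so the gap keeps growing
```

Run it and the per-user margin never catches up; the loss roughly triples every year even though every individual customer is profitable.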
You can say it’s splitting hairs, but an insightful perspective often requires teasing things apart.
It’s splitting a hair, but a pretty important hair. Does anyone think that models won’t need continuous retraining? Does anyone think companies won’t keep trying to scale models? Personally, I think we’re reaching diminishing returns with scaling, which is probably good because we’ve basically run out of content to train on, and so perhaps that does stop or at least slow down drastically. But I don’t see a scenario where constant retraining isn’t the norm, even if the amount of content we’re using for it grows only slightly.
Well, models are definitely good enough for some things in their current state, without needing to be retrained (machine translation, for example, was essentially a solved problem as of GPT-3).
That’s true but irrelevant. No AI company is stopping training and further model development. OpenAI didn’t stop with GPT-3, and they won’t stop with GPT-5. No company, AI or not, stops innovating in its market segment. You need to keep innovating to stay competitive.