The "model as company" metaphor makes no sense. Models are really products, like a shoe. Nike spends money developing a shoe, then building it, then selling it, and ideally those R&D costs are recouped in shoe sales. But you still have to run the whole company outside of that.

Also, in Nike's case, as they grow they get better at making more shoes for cheaper. LLM providers tell us that every new model (shoe) costs multiples more than the last one to develop. If they make 2x their training cost in revenue, as he's said, then to stay profitable they have to either double prices or double users every year, or stop making new models.
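The arithmetic behind that claim can be sketched as a quick back-of-the-envelope (a hypothetical illustration; all numbers are invented, not any provider's actual figures):

```python
# Hypothetical illustration: assume each new model costs 2x the previous
# one to train, and the provider targets revenue of 2x training cost per
# generation (the "2x revenue on training" claim above).
train_cost = 100.0  # generation-0 training cost, arbitrary units
for gen in range(4):
    revenue_needed = 2 * train_cost  # revenue target: 2x training cost
    print(f"gen {gen}: train cost {train_cost:.0f}, revenue needed {revenue_needed:.0f}")
    train_cost *= 2  # next generation costs ~2x more to develop
```

Since both factors double each generation, the required revenue doubles every year too, which is exactly the "double prices or double users" bind.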

But new models to date have cost more than the previous ones to create, often by an order of magnitude, so the shoe metaphor falls apart.

A better metaphor would be oil and gas production, where existing fields are either already finished (i.e. the model is no longer SOTA, no longer making a return on investment) or currently producing (SOTA inference, making a return on investment). The key similarity with AI is that new oil and gas fields are increasingly expensive to bring online, because they are harder to make economical than the first ones we stumbled across bubbling up in the desert, even with technological innovation. That is to say, the low-hanging fruit is long gone.

> new models to date have cost more than the previous ones to create

This was largely the case in software from the '80s through the '10s (when versioned releases largely disappeared) and is still the case in hardware. The iPhone 17 will certainly cost far more to develop than the iPhone 10 or 5 did, and the 5 cost far more than the 3G, etc.

I don't think it's the case if you take inflation into account.

You could see here: https://www.reddit.com/r/dataisbeautiful/comments/16dr1kb/oc...

New ones are generally cheaper if adjusted for inflation. That chart shows sale prices, but assuming margins stay the same it should reflect manufacturing cost. And from what I remember of Apple's earnings, their margins increased over time, which means the new phones are effectively even cheaper. Which kind of makes sense.

I should have addressed this. This thread is about the capital costs of getting to the first sale, so that's model training for an LLM vs all the R&D in an iPhone.

Recent iPhones use Apple's own custom silicon for a number of components, and are generally vastly more complex. The estimates I have seen for iPhone 1 development range from $150 million to $2.5 billion. Even adjusting for inflation, a current iPhone generation costs more than the older versions.

And it absolutely makes sense for Apple to spend more in total to develop successive generations, because they have less overall product risk and larger scale to recoup.

exactly: it’s like making shoes if you’re really bad at making shoes :)

If you're going to use shoes as the metaphor, a model would be more like a shoe factory. A shoe would be an LLM answer, i.e. inference. In which case it totally makes sense to consider each factory as an autonomous economic unit, like a company.

Analogies don't prove anything, but they're still useful for suggesting possibilities for thinking about a problem.

If you don't like "model as company," how about "model as making a movie?" Any given movie could be profitable or not. It's not necessarily the case that movie budgets always get bigger or that an increased budget is what you need to attract an audience.

I believe a better analogy is CPU development on the next process node.

Each node is much more expensive to design for, but once you finally have it, you basically print money.

And of course you always have to develop the next, more powerful and power-efficient CPU to stay competitive.

>Also, in Nike's case, as they grow they get better at making more shoes for cheaper.

This is clearly the case for models as well. Training and serving inference for GPT4 level models is probably > 100x cheaper than they used to be. Nike has been making Jordan 1's for 40+ years! OpenAI would be incredibly profitable if they could live off the profit from improved inference efficiency on a GPT4 level model!

>>This is clearly the case ... probably

>>OpenAI would be incredibly profitable if they could live off the profit from improved inference efficiency on a GPT4 level model!

If gpt4 was basically free money at this point it's real weird that their first instinct was to cut it off after gpt5

> If gpt4 was basically free money at this point it's real weird that their first instinct was to cut it off after gpt5

People find the UX of choosing a model very confusing, the idea with 5 is that it would route things appropriately and so eliminate this confusion. That was the motivation for removing 4. But people were upset enough that they decided to bring it back for a while, at least.

They picked the worst possible time to make the change if money wasn't involved (which is why I assumed GPT-5 must be massively cheaper to run). The backlash from being forced to use it cost the model a fair bit of its reputation.

Yeah, it didn't work out for them, for sure.

I think the idea here is that gpt-5-mini is the cheap gpt-4 quality model they want to serve and make money on.

It's "model as a company" because people are applying the VC mentality, and it also explains the competition.

Model as a product is the reality, but each model competes with previous models and is only successful if it's both more cost-effective and more effective in general at its tasks. By the time you get to model Z, you'll never use model A for any task, as the model lineage cannibalizes its own sales.