> The models as-is are still hugely useful, even if no further training were done.

Exactly. The parent comment has an incorrect understanding of what unit economics means.

The cost of training is not a factor in the marginal cost of each inference or each new customer.
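In cost-function terms (a toy sketch with invented numbers, not modeled on any actual lab):

```python
# Toy cost function: C(q) = F + c*q, where q is inference volume,
# F is the fixed cost (training runs, salaries) and c is the
# per-request compute cost. All figures here are invented.
def total_cost(q, fixed=1e9, per_request=0.001):
    return fixed + per_request * q

# Marginal cost = cost of serving one more request = dC/dq = c
marginal = total_cost(1_000_001) - total_cost(1_000_000)
print(f"{marginal:.4f}")  # 0.0010 -- the $1B fixed cost drops out entirely
```

That's all "marginal cost" means: however large the training spend, it never appears in the per-request figure.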

It’s unfortunate that this comment thread is the highest-upvoted one right now when it’s based on a basic misunderstanding of unit economics.

The marginal cost is not the salient factor when the model has to be frequently retrained at great cost. Even if the marginal cost were driven to zero, would they profit?

But they don't have to be retrained frequently at great cost. Right now they are retrained frequently because everyone keeps coming out with new models and nobody wants to fall behind. But if investment in AI were to dry up, everyone would stop throwing so much money at R&D, and if everyone else isn't investing in new models, you don't have to either. The models are powerful as they are, most of the knowledge in them isn't going to become obsolete quickly, and where that is a concern you can paper over it with RAG or MCP servers. If everyone runs out of money for R&D at the same time, we could easily fall back to a situation where we get an updated version of the same model every 3 years instead of a bigger/better model twice a year.

And whether companies can survive in that scenario depends almost entirely on the unit economics of inference, ignoring current R&D costs.
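To put toy numbers on that (every figure below is invented; only the structure of the calculation matters):

```python
# Hypothetical lab with positive unit economics on inference and a
# large fixed cost per training run. All numbers are made up.
requests_per_year = 500e9   # annual inference volume
price             = 0.002   # revenue per request ($)
marginal_cost     = 0.001   # compute cost per request ($)
train_cost        = 1e9     # cost of one training run ($)

# Contribution margin from inference is the same in both scenarios.
margin = requests_per_year * (price - marginal_cost)  # $0.5B/yr

for years_between_retrains in (0.5, 3):
    amortized_training = train_cost / years_between_retrains
    profit = margin - amortized_training
    print(f"retrain every {years_between_retrains}y: ${profit / 1e9:+.2f}B/yr")

# retrain every 0.5y: $-1.50B/yr  (new model twice a year: today's race)
# retrain every 3y:   $+0.17B/yr  (slow cadence: the same margins turn a profit)
```

Nothing about serving costs changes between the two lines; the only variable is how often the fixed training cost recurs.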

As we've seen with Karpathy & Murati starting their own labs, it's to be expected that over the next 5 years, hundreds of engineers & researchers at the bleeding edge will quit and start competing products. They'll reliably raise $1b to $5b in weeks, too. And it's logical: for an investor, a startup founded by a Tier 1 researcher is a more reliable way to 10-100x your capital than Anthropic or OpenAI, which are already valued at $250b+.

This talent diffusion guarantees that OpenAI and Anthropic will have to keep sinking ever more money into staying at the bleeding edge, or upstarts like DeepSeek and incumbents like Meta will simply outspend them and hire away all the Tier 1 talent to upstage them.

The only companies that'll reliably print money off AI are TSMC and NVIDIA, because they'll get paid either way. They're selling shovels, and even if the gold rush ends up being a bust, they'll still do very well.

True. But at some point, the fact that there are many, many players in the market will start to diminish the valuation of each of those players, don’t you think? I wonder what that point would be.

> But if investment for AI were to dry up everyone would stop throwing so much money at R&D, and if everyone else isn't investing in new models you don't have to either

IF.

If you do stagnate for years, someone will eventually decide to invest and beat you. Intel has proven as much.

Yeah, so? How does that change anything?

Unit economics are the salient factor in the cost of inference, which is what this article is about.

[deleted]

I upvoted it because it aligns most closely with my own perspective. I have a strong dislike for AI and everything associated with it, so my judgment is shaped by that bias. If a post sounds realistic or complex, I have no interest in examining its nuance. I am not concerned with practical reality and prefer to accept it without thinking, so I support ideas that match my personal viewpoint.

I don’t understand why people like you have to call this stuff out. Like, most of HN thinks the way I do, and that’s why the post was upvoted. Why be a contrarian? There’s really no point.

Is this written by a sarcastic AI?