Their assumption is that training is a fixed cost: you'll spend the same amount on training for 5 users as you will for 500 million users.

Spending hundreds of millions of dollars on training when you are two guys in a garage is quite significant, but the same amount is absolutely trivial if you are planet-scale.
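A quick back-of-the-envelope sketch of that amortization argument (the $200M figure and user counts are purely illustrative, not from the thread):

```python
# Hypothetical numbers: a fixed training cost spread over the user base.
TRAINING_COST = 200_000_000  # assume ~$200M for one frontier-scale training run

for users in (5, 500, 500_000, 500_000_000):
    per_user = TRAINING_COST / users
    print(f"{users:>11,} users -> ${per_user:,.2f} of training cost per user")
```

With 5 users the run costs $40M per user; with 500 million users it's $0.40 per user. Same bill, wildly different economics.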

The big question is how training costs will develop. The best-case scenario is a one-and-done training run. But we're now seeing an arms race between the various AI providers; in the worst case, can the market survive an exponential increase in training costs in exchange for sublinear improvements in quality?
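To make that worst case concrete, here is a toy model (growth rate and quality numbers are assumptions for illustration, not claims about any real provider) where each generation costs roughly 3x more to train but delivers a smaller quality gain than the last:

```python
# Toy model of the worst case: cost grows geometrically per model generation,
# while the quality gain per generation shrinks. All numbers are illustrative.
cost = 100e6        # assumed cost of generation 1 ($100M)
quality = 100.0     # arbitrary quality score for generation 1

for gen in range(1, 6):
    gain = 10 / gen  # sublinear: each generation adds less quality than the last
    print(f"gen {gen}: cost ${cost / 1e9:5.2f}B, quality {quality:6.1f}, "
          f"cost per quality point gained ${cost / gain / 1e6:,.0f}M")
    quality += gain
    cost *= 3        # exponential: each run costs ~3x the previous one
```

By generation 3 the cost per point of quality gained has gone from $10M to $270M, and it keeps diverging from there.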

They just won’t train it. They have the choice.

Why do you think they will mindlessly train extremely complicated models if the numbers don’t make sense?

Because they are trying to capture the market, obviously.

Nobody is going to pay the same price for a significantly worse model. If your competitor brings out a better model at the same price point, you either a) drop your price to attract a new low-budget market, b) train a better model to retain the same high-budget market, or c) lose all your customers.

You have taken on a huge amount of VC money, and those investors aren't going to accept option a) or c). What is left is option b): burn more money, build an even better model, and hope your finances outlast the competition's.

It's the classic VC-backed startup model: operate at a loss until you have killed the competition, then slowly raise prices once your customers have no alternative to switch to. It worked great for Uber & friends.