Because they're spending it all on training the next model.
That's an argument for why OpenAI and Anthropic shouldn't be profitable, but this point is also that their customers aren't using the models to generate a profit either. Things like Cursor, for example. ETA: note also the recent MIT study that found 95% of LLM pilots at for-profit companies were not producing returns.
This article is about the model providers' costs, not API users'. Cursor etc have to pay the marked-up inference costs, so it's not surprising they can't make a profit.
Yes, and the comment you first replied to was about the state/viability of the industry as a whole. If users can't make money from this "transformative technology", even when the provider is in the stage of burning money for the sake of growth, that sort of tells against it turning into a trillion dollar industry or whatever the hype claims.
The point is that the providers aren't burning money by subsidising inference costs. On the contrary, if this article is to be believed, they're charging healthy margins on it.
So there are two answers: for the model providers, it's because they're spending it all on training the next model. For the API users, it's because they're spending it all on expensive API usage.
The point is that the margins aren't "healthy" for the industry if their users can't be profitable, because if that's the case, the users will all go out of business, and the providers will stop being able to charge anybody anything, margin or no.
A factory can make cheap goods and not reach profitability for some time due to the large capital outlay in spinning up a factory and tooling. It is likely there are large capital costs associated with model training that are recouped over the lifetime of the model.
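The factory analogy can be sketched as a toy calculation (all figures below are invented purely for illustration, not actual provider economics): per-unit margins can be healthy while the business as a whole stays unprofitable until the up-front capital cost is amortized.

```python
# Toy unit economics with made-up numbers: inference carries a healthy
# per-token margin, yet the model is unprofitable overall until enough
# volume has been sold to recoup the one-off training cost.

TRAINING_COST = 100_000_000        # hypothetical one-off training capex ($)
INFERENCE_COST_PER_M_TOKENS = 2.0  # hypothetical serving cost ($ per 1M tokens)
PRICE_PER_M_TOKENS = 10.0          # hypothetical API price ($ per 1M tokens)

# Per-unit margin on inference alone (ignoring training).
margin_per_m_tokens = PRICE_PER_M_TOKENS - INFERENCE_COST_PER_M_TOKENS

# Volume needed before the model breaks even once training is included.
breakeven_m_tokens = TRAINING_COST / margin_per_m_tokens

print(f"gross inference margin: {margin_per_m_tokens / PRICE_PER_M_TOKENS:.0%}")
print(f"break-even volume: {breakeven_m_tokens:,.0f}M tokens")
```

With these invented numbers, inference runs at an 80% gross margin, yet the model still needs 12.5 trillion tokens of paid usage over its lifetime before the training outlay is recouped, which is the factory dynamic: cheap goods, large capital costs, delayed profitability.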
Training.
"Why would you reinvest profits back into a business that is extremely profitable, when you have the chance of pulling your money out?"
You are making a joke, but in all seriousness there are a ton of software companies that kept reinvesting when they should have taken profits out, especially when they were peaking.
Sure. There are times to pull money out and there are times to reinvest money. It's not always clear which is the better choice, except in hindsight. But the parent poster was treating it as completely implausible that someone would choose to reinvest profits back into a profitable business.