These numbers are off.

> $20/month ChatGPT Pro user: Heavy daily usage but token-limited

ChatGPT Pro is $200/month, and Sam Altman already admitted in January 2025 that OpenAI is losing money on Pro subscriptions:

"insane thing: we are currently losing money on openai pro subscriptions!

people use it much more than we expected."

- Sam Altman, January 6, 2025

https://xcancel.com/sama/status/1876104315296968813

I just straight-up don't trust him.

Saying that is the equivalent of him saying "our product is really valuable! use it!"

There's the usual issue of a CEO "talking their book," but there's also the fact that Sam has a rich, documented history of lying. That was the central issue of his firing, and "Empire of AI" has a detailed account of it. He would outright tell board member A that "board member B said X," assuming, from his read of the board's social dynamics, that A and B would never talk. But they eventually compared notes, it unraveled, and they confronted him as a group. Specifically, when they confronted him about telling Ilya Sutskever that Tasha McCauley had said Helen Toner should step off the board, McCauley said "I never said that," and Altman was at a loss for words for a minute before finally mumbling, "Well, I thought you could have said that. I don't know."

That's my interpretation as well: it's a marketing attempt. A form of "our product is such a good value that it's losing us money. It's practically the Costco hot dog combo!"

That doesn't seem compatible with what he stated more recently:

> We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.

Source: https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...

His possible incentives, and the fact that OpenAI isn't a public company, simply make it hard for us to gauge which of these statements is closer to the truth.

Does anybody, at this point, really think that what a CEO says has anything to do with reality, rather than just hype, à la the Elon recipe?

Specifically, a connected CEO in post-law America.

This sort of thing used to be called fraud, but there's zero chance of criminal prosecution.

Criminal prosecution? This scheme has been perfected; what exactly would you prosecute? Can you say with certainty that he means it's profitable overall? What if he means it's profitable right now, today, but wasn't yesterday or over the last week? Or what if he meant that the mean user is profitable? There's so much room for interpretation, and that's why there's no risk for them.

> That doesn't seem compatible with what he stated more recently:

Profitable on inference doesn't mean they aren't losing money on Pro plans. What's not compatible?

The API requests are likely making more money.

Yes, API pricing is usage-based, but ChatGPT Pro pricing is a flat rate for a time period.

The question is then whether SaaS companies paying usage-based GPT API prices are profitable when they charge their own users a flat rate for a time period. If their users trigger too much inference, they would also lose money.

Both can be true if you assume a large number of $20 subscribers who don't use the product much, while the $200 subscribers squeeze out every last bit and then some. The overall balance could still be positive, but the power users, looked at alone, might cost more than they pay.
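
A back-of-the-envelope sketch of that mix (a minimal illustration; the subscriber counts, token volumes, and per-token cost are all invented for the example, not real OpenAI figures):

    # Hypothetical tiers: many light $20 users, a few heavy $200 users.
    # (subscribers, price per month, avg tokens per month) -- all made up.
    tiers = {
        "plus": (1_000_000,  20,    500_000),
        "pro":  (   50_000, 200, 40_000_000),
    }
    COST_PER_1K_TOKENS = 0.01  # assumed blended inference cost

    blended = 0
    for name, (users, price, tokens) in tiers.items():
        cost = tokens / 1000 * COST_PER_1K_TOKENS
        margin = price - cost
        blended += users * margin
        print(f"{name}: margin per user = ${margin:+,.0f}/month")
    print(f"blended: ${blended:+,.0f}/month")

    # plus: margin per user = $+15/month
    # pro:  margin per user = $-200/month
    # blended: $+5,000,000/month -- positive overall, negative on power users

Under these made-up numbers, every Pro subscriber loses $200/month while the business as a whole stays in the black, which is exactly the shape of "losing money on Pro subscriptions" coexisting with "profitable on inference."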

They might even have decided "hey, these power users are willing to try things and tell us what LLMs are useful for, and are even willing to pay us for the opportunity!"

> If we didn't pay for training

It is comical that something like this was even uttered in the conversation. It really shows how disconnected the tech sector is from the real world.

Imagine Intel's CEO saying "If we didn't have to pay for fabs, we'd be a very profitable company." Even in passing. He'd be ridiculed.

I'm not entirely sure the analogy is fair - Amazon, for example, was 'ridiculed' for being hugely unprofitable for its first decade, but had underlying profitability if you removed capex.

As a counterpoint, if OpenAI were actually profitable at this early stage that could be a bad financial decision - it might mean that they aren't investing enough in what is an incredibly fierce and capital-intensive market.

Also, it amounts to admitting that this business would be impossible if they had to respect copyright law - so the laws will be adjusted so that it can be a business.

Doesn't he have an incentive to make it look like that, though? The way he phrased it, that they are losing money because people use it so much, makes it seem like Pro subscribers are super power-users. As long as inference has a positive per-request cost, heavy enough flat-rate usage will lose money, so Sam isn't admitting that the business model is flawed or anything.

https://news.ycombinator.com/item?id=45053741

> The most likely situation is a power law curve where the vast majority of users don't use it much at all and the top 10% of users account for 90% of the usage.

That'll be the Pro users. My wife uses her regular sub very lightly; most people will be like her...
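
To put illustrative numbers on that power law: if the top 10% of users account for 90% of usage, the average user in that top decile consumes (0.9/0.1) / (0.1/0.9) = 81x as many tokens as the average user in the bottom 90%. At any flat price, it's easy to lose money on a cohort like that while the light users remain profitable.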

Anyone paying attention should have zero trust in what Sam Altman says.

What do you think his strategy is? He has to make money at some point.

I don’t buy the logic that he will “scam” his investors and run away at some point.

He makes money by convincing people to buy OpenAI stock.

If OpenAI goes down tomorrow, he will be just fine. His incentive is to sell the stock, not actually build and run a profitable business.

Look at Adam Neumann as an example of how to lose billions of investor dollars and still walk out of the ensuing crash with over a billion.

https://en.wikipedia.org/wiki/Adam_Neumann

His strategy is to sell OpenAI stock like it was Bitcoin in 2020, and if for some reason the market decides that maybe a company that loses large amounts of cash isn't actually a good investment... he'll be fine, he's had plenty of time to turn some of his stock into money :)

Why not build a profitable business like Zucc, Bill Gates, Jensen, Sergey, etc.? These people are way richer and much more powerful.

I believe, but have no proof, that the answer is "because it's easier to sell stock in an unprofitable business than build a profitable one", although given the other comment, there's a good chance I'm wrong about this :)

Altman doesn't have any stock. He's playing a game at a level people caught up on "capitalism bad" can't even conceptualize.

I'm more "capitalism good" (8 billion people on earth, 7 billion can read, 5 billion have internet, and almost no one dies in childbirth anymore in rich countries, which is several billion people), but that is really interesting that he has no stock and just gets salary.

I guess if other people buying stock in your company is what enables your super high salary (plus benefits like the company plane, etc.), you are still kinda selling stock. And honestly, having considered the 'start a random software company aligned with the present trend (so ~2015 DevOps/cloud, 2020 cryptocurrency/blockchain, 2024 AI/ML), pay myself a million-dollar-a-year salary, and close shop after 5 years because "no market lol"' route to riches myself, I still wouldn't consider Altman to be completely free of perverse incentives here :)

Still, very glad you pointed that out, thanks for sharing that information ^^

Again incorrect. He doesn’t have a super high salary.

Holy shit you are right. He owns no equity and just gets a salary. I have no idea about the game he’s playing.

> He has to make money at some point.

Yes, but two paths to doing that are to a) build a profitable company, and b) accumulate personal wealth and walk away from a non-profitable company.

I'm not saying OpenAI is unprofitable, but nor do I see Altman as the sort who'd rule out option b.

Trusting the man about costs would be even more misplaced than trusting an oil company's CEO about the environment.

That's interesting, but it doesn't mean they're losing money on the $20/month users. The Pro plan selects for heavy-usage enthusiasts.

Losing money on o1-pro makes sense, and it's also why they axed that entire class of models.

Every o1-pro and o1-preview inference cost a normal inference times however many replica paths they ran.
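
To make that arithmetic concrete (both numbers invented for illustration, and the replica design itself is speculation): if a normal o1 call costs $0.25 of compute and o1-pro fans out 8 parallel reasoning paths before picking an answer, each Pro request costs roughly 8 x $0.25 = $2.00 before any selection overhead.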

Apologies, should be Plus. I'll update the article later.