I don't think they've broken out O1 revenue, but it must be very small at the moment since it was only just introduced. Their O1-preview pricing doesn't seem to reflect the exponential compute cost, so perhaps it is not currently priced to be profitable. Overall, across all models and revenue streams, their revenue does exceed inference costs ($4B vs $2B), but they are still projected to lose $5B this year, $14B next year, and not make a profit until 2029 (and only then if they've increased revenue by 100x ...).

Training costs are killing them, and it's obviously not sustainable to keep spending more on research and training than the revenue generated. Training costs are expected to keep growing fast, while revenue per token in/out is plummeting - they need massive inference volume to turn this into a profitable business, and need to pray that this doesn't turn into a commodity business where they are not the low-cost producer.

https://x.com/ayooshveda/status/1847352974831489321

https://x.com/Gloraaa_/status/1847872986260341224

The thing is that OpenAI can choose to spend less on training at any time.

We've seen this before with, for example, Amazon, which made a deliberate effort to avoid profitability by spending as much as possible on infrastructure until revenue grew so much they couldn't spend it all.

Being in a position where you are highly cash-flow positive, and where the costs are strategic investments, seems like a good position.

I don't know how you can compare Amazon vs OpenAI on the fundamentals of the two businesses. It's the difference in fundamentals that made Amazon a buy at absurd P/Es, along with some degree of luck in AWS becoming so profitable, while OpenAI IMO seems a much dodgier value proposition.

Amazon were reinvesting and building scale, breadth and efficiency that has become an effective moat. How do you compete with Amazon Prime free delivery without your own delivery fleet, and how do you build that without the scale of operations?

OpenAI don't appear to have any moat, don't own their own datacenters, and the datacenters they are using run on expensive NVIDIA chips. Compare to Google with their own datacenters and TPUs, Amazon with their own datacenters and chips (Graviton), and Meta with their own datacenters (providing value to their core business) and chips - and giving away the product for free despite spending billions on it ... If this turns into the commodity business that it appears it may (all frontier models converging in performance), then OpenAI would seem to be in trouble.

Of course OpenAI could stop training at any time, but to the extent that there is further performance to be had from continued scaling and training, they would be left behind by the likes of Meta, who have a thriving core business to fund continued investment and are not dependent on revenue directly from AI.