“We’re not profitable even if we discount training costs.”

and

“Inference revenue significantly exceeds inference costs.”

are not incompatible statements.
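A toy illustration with entirely made-up numbers (not OpenAI's actual financials, just hypothetical figures to show the two claims can coexist): inference can be profitable on its own while the company as a whole still loses money once salaries, R&D, and other overhead are counted, even before training.

```python
# Hypothetical figures, in $B, purely illustrative; not OpenAI's actual numbers.
inference_revenue = 4.0  # revenue from serving existing models
inference_cost = 2.0     # compute cost of serving those models
other_opex = 3.0         # salaries, R&D, overhead (non-training, non-inference)
training_cost = 5.0      # cost of training new models

# Inference alone is profitable: 4.0 > 2.0
print(inference_revenue > inference_cost)  # True

# But the company still loses money even with training excluded: 4.0 - (2.0 + 3.0)
print(inference_revenue - (inference_cost + other_opex))  # -1.0

# And the loss grows once training is included: 4.0 - (2.0 + 3.0 + 5.0)
print(inference_revenue - (inference_cost + other_opex + training_cost))  # -6.0
```

Under numbers like these, both quoted claims are true at the same time.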

So maybe only the first part of Sam’s comment was correct.

I should have provided a direct quote:

> At first, he answered, no, we would be profitable, if not for training new models. Essentially, if you take away all the stuff, all the money we’re spending on building new models and just look at the cost of serving the existing models, we are profitable on that basis. And then he looked at Brad Lightcap, who is the COO. And he sort of said, right? And Brad kind of squirmed in his seat a little bit and was like, well — He’s like, we’re pretty close.

I don't think you can square that with what he stated in the Axios article:

> "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."

Except, perhaps, if the NYT dinner happened after the interview for the Axios article, which is possible given when each was published, and he was genuinely unaware of the company's financials at the time.

Personally, it feels like it should reflect very poorly on OpenAI that their CEO has been, charitably, entirely unaware of how close they are to profitability (or, uncharitably, actively lying about it). But I'm not sure the broader news cycle caught it; the only place I've heard this mentioned is this NYT Hard Fork podcast, which is hosted by the people who were at the dinner.

I imagine that one of OpenAI's largest costs is the wages they pay.