I have to disagree. The biggest cost is still energy consumption, water and maintenance. Not to mention keeping up with rivals at an incredibly high tempo (hence offers in the billions, like Meta recently). Then there's the cost of hardware, which tracks Nvidia's skyrocketing shares :) No one should dare to talk about profit yet. Now is the time to grab the market, invest a lot and work hard, hoping for future profit. The equation is still a work in progress.

The capital costs for the GPU are an order of magnitude larger than the energy consumption. It doesn't matter whether the GPUs are used for training or inference.

Back of the envelope: a $25k GPU amortized over 5 years is $5k/year. A 500W GPU running at full power around the clock uses about 4.4 MWh/year; at $0.15/kWh the electricity costs roughly $650/year.
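A quick sketch of that arithmetic in Python, using the same assumed figures ($25k GPU, 5-year amortization, 500W sustained draw, $0.15/kWh); the numbers are illustrative, not vendor quotes:

```python
# Back-of-envelope: amortized GPU capex vs. electricity cost per year
gpu_price = 25_000            # USD, assumed purchase price
amortization_years = 5
power_w = 500                 # assumed sustained full-power draw, watts
price_per_kwh = 0.15          # USD per kWh, assumed electricity price

capex_per_year = gpu_price / amortization_years             # $5,000 / year
energy_kwh_per_year = power_w / 1000 * 24 * 365              # ~4,380 kWh (~4.4 MWh)
electricity_per_year = energy_kwh_per_year * price_per_kwh   # ~$657 / year

print(f"capex/year:       ${capex_per_year:,.0f}")
print(f"electricity/year: ${electricity_per_year:,.0f}")
print(f"capex is {capex_per_year / electricity_per_year:.1f}x the electricity cost")
```

With these assumptions the capex comes out roughly 7-8x the electricity bill, which is where the "order of magnitude" claim comes from.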

The other operating costs you suggest have to be even smaller.

Is that not baked into the H100 rental costs?

It is.

> The biggest cost is still energy consumption, water and maintenance.

Are you saying that the operating costs for inference exceed the costs of training?

The global cost of inference at both OpenAI and Anthropic surely exceeds the training cost. The reason is simple: inference cost grows with the number of requests, not with the size of the dataset. My math, simplified with AI help: suppose training a GPT-like model costs C_T = $10,000,000 and each query costs C_I = $0.002. Break-even:

N > C_T / C_I = 10,000,000 / 0.002 = 5,000,000,000 inferences

So after 5 billion queries, cumulative inference cost surpasses the training cost.

OpenAI claims over 100 million users; multiply by queries per user and judge for yourself.
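For what it's worth, here is the same toy break-even in a few lines of Python; the $10M training cost and $0.002/query are purely illustrative assumptions from above, not real OpenAI or Anthropic figures:

```python
# Toy break-even: when does cumulative inference spend exceed a one-time training cost?
training_cost = 10_000_000    # USD, assumed one-time training cost (C_T)
cost_per_query = 0.002        # USD, assumed marginal cost per query (C_I)

break_even_queries = training_cost / cost_per_query          # 5,000,000,000 queries
print(f"break-even after {break_even_queries:,.0f} queries")
```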

No. But training an LLM is certainly very very expensive and a gamble every time you do it. I think of it a bit like a pharmaceutical company doing vaccine research…