What do you value a company at that has gotten to $14b in revenue in 3 years and has 60%+ margin on inference? Just out of curiosity.

60%+ margin on inference: source?

+ R&D costs

Of course, if one does not "pay" for the investment, profits are easy to show.

I am struggling with this because I have an Anthropic offer vs another equivalent offer that is all cash.

But project out forwards.

- What happens when Google builds a similar model? Or even Meta, as far behind as they are? They have more than Anthropic in cash flow to pour into these models.

- What happens when OSS is "enough" for most cases? Why would anyone pay 60% margins on inference?

What is Anthropic's moat? The UX is nice, but it can be copied. And other companies will have similarly intelligent models eventually. Margins will then be a race to the bottom, and the real winners will be GPU infra.

If they outlast the competition, it might be a really hard market to enter. Models are expensive to train, and they get outdated, so you're on a time limit to make a profit off of each one.

Google and Meta might be the only real threats here, given how much cash they have, and so far Meta is just flopping.

If you have an offer, you can and should ask this question of whomever you're coordinating with. They will give you an honest answer.

I've been in this situation before. Anthropic has a stupid business model but the market can stay irrational longer than you can stay solvent. If you get in there you will be aligned with people who structurally do not lose.

Big picture, sure. We can talk about the millions that corporations will make and who's going to do what. But you're a person, and $1 million in options is probably meaningful for you. These companies aren't IPOing, but the secret is that they're still paying employees cash for their options. SpaceX employees have had what's called a tender, which means they get to sell some of their hypothetical SpaceX options for cold hard cash in the bank that they can use to pay a mortgage. There's zero guarantee that Anthropic will do such a thing before the bubble bursts, but if they do, and you're there, who cares about a software company moat when you have enough money to buy a castle in Napa and pay to have a real actual moat with water in it and crocodiles, if that's what you want.

Others are made of different stuff, and are going to go right back to work, even though they could go off to a beach for the rest of forever somewhere.

> who cares about a software company moat when you have enough money to buy a castle in Napa and pay to have a real actual moat with water in it and crocodiles, if that's what you want.

Doesn't this require their private market valuations to go well into the trillions?

It would have to be a small castle.

Is their overall margin also about 60%? Or something saner, like 30%?

Their overall margin is negative.

No, it’s not. This is a dangerous perspective, usually held by engineers who think that accounting doesn’t matter and don’t understand it.

You MUST amortize the capital expense (R&D, in this case) over the useful life of the assets it creates to answer this question.

The company (until this announcement) had raised $17B and has a $14B revenue run rate with a 60% operating margin.

It is only negative on margin if you assume the prior $14B (e.g. Claude 4.6 plus whatever's unreleased) will have no value in 24 months. In that case, well, they probably wasted money training.

If you think their growth rate will continue, then you only need to believe the models have a useful life of nine months or so for them to break even.

Anthropic is, according to Dario, profitable on every model they have trained, if you consider them individually. You would do best to ask: "Will this pattern continue?"

What is the lifetime value of an individual pretraining run, and what is the cost to do it? Whether it is a net positive seems to still be an open question.

Actually, there is a track record of answers to this question, because the frontier providers have been delivering new models for some time. The answer is that, so far, they have been net positive.

Sorry, but if a model costs (say) $20B to train, lasts 12 months before it becomes obsolete, and generates $2B/month in revenue against $1B/month in inference costs, then it nets $12B over its life against a $20B training cost: a loss of $8B.

Or are you suggesting that in fact each model comes out ahead over its lifespan, and all this extra cash is needed because the next model is so much more costly to train that it is sucking up all the profits from the current, but this is ok because revenue is expected to also scale?
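A minimal sketch of the arithmetic in that hypothetical; the function name and $bn units are mine, not from the thread:

```python
# Hypothetical numbers from the comment above, in $bn.
def model_lifetime_pnl(train_cost, months, rev_per_month, infer_per_month):
    """Net profit of a single model over its whole useful life."""
    return months * (rev_per_month - infer_per_month) - train_cost

# $20bn to train, 12-month life, $2bn/month revenue, $1bn/month inference cost:
print(model_lifetime_pnl(train_cost=20, months=12, rev_per_month=2, infer_per_month=1))
# -8 -> that model loses $8bn over its lifetime
```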

I'm not suggesting, I'm saying that this is what has happened so far, full stop, based on multiple public statements from people like Dario.

Basically every model trained so far has made money for Anthropic and OpenAI. Well, maybe not GPT-4.5; we liked you, but we hardly knew ye.

The cash spend is based on two beliefs: a) this profitability will continue or improve, and b) scaling is real.

Therefore, rational actors are choosing to 2-10x their bets in sequence, seeing that the market keeps paying them more for each step increase in quality, and believing either that liftoff is possible or that the benefits from the next run will translate into increased real cash returns.

What's obscure to many is that these capital investments happen time-shifted from model income. Imagine a sequence of model trainings and deployments that run back to back: pay $10m, make $40m. Pay $100m, make $400m. Pay $1bn, make $4bn. Pay $10bn (we are here; the expectation is: make $40bn).

If you did one of those per year, the company charts would look like: $30m in profits, $300m in profits, $3bn in profits. And in fact, if you do some sort of product-based accrual accounting, that's what you would see.

Pop quiz: if you spend your whole training budget in the first month, and the cycles all start in November, what would the cash-basis statement look like for the same business model I just described?

-$10m, -$60m, -$600m, -$6bn. This is the same company, just viewed through different accounting periods.
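The two views of the same business can be sketched directly. This assumes, as above, one model generation per year, each costing 10x the last to train and returning 4x its cost over its life; the variable names are mine:

```python
# All figures in $m. One model generation per year.
train = [10, 100, 1_000, 10_000]     # training spend per generation
revenue = [4 * c for c in train]     # lifetime revenue per generation

# Accrual view: match each model's revenue against its own training cost.
accrual = [r - c for r, c in zip(revenue, train)]
print(accrual)   # [30, 300, 3000, 30000] -> profit per generation

# Cash view: generation n+1's training is paid up front (November),
# landing in the same calendar year as generation n's revenue.
cash = [-train[0]] + [revenue[i] - train[i + 1] for i in range(len(train) - 1)]
print(cash)      # [-10, -60, -600, -6000] -> the "never made money" headline
```

Same business, same cash flows; only the period in which each training bill is matched against revenue differs.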

Back in reality, shortly into year 1 it was clear (or a hopeful dream) that the next step (-$100m / +$400m) was likely, so the company embarked on spending that money well ahead of the end of the first model's revenue cycle. Then they did it again and again. As a result, naive journalists can convince engineers that "they've never made money." Actually, they've made money over and over and are making more and more; they're simply choosing to throw it all at the next rev.

Whether it's a good idea to do that is a question worth debating. But it's good to have a clear picture of these companies' finances; it helps explain why they're getting the investment.