It’s a $200M contract. That’s not nothing, but it’s not a huge sum for companies at their scale, when they’re spending billions on infrastructure.

I’m sure Anthropic has signed up more than enough revenue this week, in response to this debacle, to cover it. Where they’re actually screwed is if the government follows through and declares Anthropic a supply chain risk.

That is only with the Pentagon directly. They stand to lose much more, because no defense contractor or subcontractor could use them for anything defense related (even using the model to invent a new type of screw, if that screw ends up in anything military).

So yeah, they bet a whole lot on “look at us, we have morals”.

Their revenue went up $4 billion in the week since this story started.


There's no legal basis for blocking defense contractors from using them. Trump's claiming he can do so, but the law doesn't back him up. He'll lose in any fair court, or any corrupt court that values billionaire interests over virtue signaling to the orange one (like the Supreme Court).

Also, they got a huge PR win, and jumped to #1 on the Apple App Store. Consumer market share is going to decide which of the AI companies is the market leader, not fickle government contracts.

Consumer market share? Absolutely not.

If you look at what generates cash, it's corp-to-corp, and that's true across most industries. While some markets are consumer-dominated, LLMs have enormous business-facing revenue potential. The consumer market is a gnat in comparison.

There are always Executive Orders that can enforce that. It's not like in the movies, where everything gets sorted out in two weeks in a single trial. It's going to take years, and we'll see if Anthropic survives that.

I'm guessing they believe they will be around longer than this administration.

It's not "just" a $200M contract; it's the start of a lucrative relationship.

1. Stargate seemed to require a dedicated press conference by the President to achieve funding targets. Why risk that level of politicization if it didn't?

2. Greg Brockman donated $25M to a Trump MAGA Super PAC last year. Why risk so much political backlash for a low-leverage return of $200M on $25M spent?

3. During WW2, military spend shot from 2% to 40% of GDP. The administration is requesting a $1.5T military budget for FY2027, up from $0.8T for FY2025. They have made clear over the past 2 months that they plan to use it and are not stopping anytime soon.

If you believe "software eats the world," it is reasonable to expect the share of total military spend captured by software companies to increase dramatically over the next decade. $100B (10% capture) is a reasonable possibility for the domestic military AI TAM in FY2027 if the spending increase is approved (so far, Republicans have not broken rank with the administration on any meaningful policy).

If US military actions continue to accelerate, other countries will also ratchet up military spend, largely on nuclear arsenals and AI drones (France has already announced an increase in its arsenal). This further increases the addressable TAM.

Given the competition and lack of moat in the consumer/enterprise markets, I am not sure there is a viable path for OpenAI to cover its losses and fund its infrastructure ambitions without becoming the preferred AI vendor for a rapidly increasing military budget. The devices bet seems to be the most practical alternative, but there is far more competition there, both domestically (Apple, Google, Motorola) and globally (Xiaomi, Samsung, Huawei), than there is for military AI.

Having run an unprofitable P&L for a decade, I can confidently state that a healthy balance sheet is the only way to maintain and defend one's core values and principles. As the "alignment" folks in the AI industry are likely to learn, the road to hell (aka a heavily militarized world) is oft paved with the best intentions.

First, I have to say I loved your thoughtful & detailed comment. You have clearly considered this from the financial side; let me add some color from the perspective of someone working with frontier researchers.

> As the "alignment" folks on the AI industry are likely to learn

I will push back here. Dario & co. are not the starry-eyed naive idealists implied. This is a calculated decision to maximize their goal (safe AGI/ASI).

You have the right philosophy on the balance sheet side of things, but what you're missing is that researchers are more valuable than any military spend or any datacenter.

It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.

There is no substitute for sheer IQ:

- You can't buy it (god knows Zuck has tried, and failed to earn their respect).

- You can't build it (yet).

- And collaboration amongst less intelligent people does not reliably achieve the requisite "Eureka" realizations.

Had Anthropic gone forth with the DoD contract, they would have lost this top crowd, crippling the firm. On the other hand, by rejecting the contract, Anthropic's recruiting just got much easier (and OAI's much harder).

Generally, the defense crowd have a somewhat inflated sense of self worth. Yes, there's a lot of money, but very few highly intelligent people want to work for them. (Almost no top talent wants to work for Palantir, despite the pay.) So, naturally:

- If OpenAI becomes a glorified military contractor, they will bleed talent.

- Top talent's low trust in the government means Manhattan Project-style collaborations are dead in the water.

As such, AGI will likely emerge from a private enterprise effort that is not heavily militarized.

Finally, the Anthropic restrictions will last, what, 2.5 more years? They are being locked out of a narrow subset of use cases (DoD contract work only; vendors can still use it for all other work, and Hegseth's reading of SCR is incorrect), and they have farmed massive reputation gains with both top talent and the next administration.

This is an interesting perspective. What happens if there is a large global war? Do researchers who were previously against working with the DoD end up flipping out of duty? Does the war budget go up? Does the DoD decide to lift any ban on Anthropic for the sake of getting the best model and does Anthropic warm its stance on not working with autonomous weapons systems?

I don’t know the answers to these questions, but if the answer is “yes” to at least 1 or 2, then I think the equation flips quite a bit. This is what I’m seeing in the world right now, and it’s disconcerting:

1. Ukraine and Russia have been in a conflict that has dragged on much longer than most people would have guessed. This has created a divide in political allegiance within the United States and Europe.

2. We captured the leader of Venezuela. Cuba is now scared they are next.

3. We just bombed Iran and killed their supreme leader.

4. China and the US are, of course, in a massive economic race for world power supremacy. The tensions have been steadily rising, and they are now feeling the pressure of oil exports from Iran grinding to a halt.

5. The past couple days Macron has been trying to quell tension between Israel and Lebanon.

I really hope we are not headed into war. I hope the fact that we all have nukes and rely on each others’ supply chains deters one. But man, does it feel like the odds are increasing in favor of one, and man, does that seem to throw a wrench in this whole thing with Anthropic vs. OpenAI.

I think the point is that there's potentially a lot more than $200m in defense dollars at stake here, in the future.