It looks like Anthropic wanted the ability to audit model usage, whereas OpenAI was fine with just an agreement.

Hegseth's tweet strongly alluded to this, and the general terms of the agreement are not public, just the hot-button ones.

Am I wrong to think that such an agreement is basically meaningless? OpenAI gets to say there are limits, the government gets to do whatever it wants, and OpenAI will be very happy not to know about it.

Bingo. You don't have to read much into this if you remember how the DoD uses the word "trust." In their world, a "trusted" system is one that has the power to break your security if it goes wrong. So when they say "unrestricted use," the likely meaning isn't just fewer guardrails; it's that the vendor doesn't get to monitor or audit how the system is being used. In other words, the government isn't handing a private company visibility into sensitive operations.