From a level-headed outside perspective

It looks like Anthropic likely wanted to be able to verify compliance with the terms of its own volition, whereas OpenAI was fine with letting the government police itself.

From the DoD perspective they don't want a situation, like, a target is being tracked, and then the screen goes black because the Anthropic committee decided this is out of bounds.

I don't know why more people don't see this. It's a matter of providing strong guarantees of reliability for the product. There is already mass surveillance. There is already life-taking without proper oversight.

I think it's a bit more nuanced than that. The government (however good or bad, just bear with me) already has oversight mechanisms, already has laws in place to prevent mass surveillance, and already has policies on autonomous killing.

So the government's stance is "We already have laws and procedures in place; we don't want and can't have a CEO also be part of those checks."

I don't think this outcome would have been any different under a normal blue government either. Definitely with less mudslinging, though.

If you think a blue government would even consider threatening to falsely accuse a company of being a supply-chain threat in order to gain leverage in a contract negotiation, you're insane. There's nothing remotely normal about this; it's not something you see in any Western democracy.

>Definitely with less mudslinging though.

The government's free to not like the terms and go with another provider. That's whatever.

The government's not free to say, "We'll blow up your business with a false accusation if you don't give us the terms we want (and then use the Defense Production Act to commandeer the product anyway)." How much more blatantly authoritarian does it get than that?