The same guy who thinks AGI will eliminate "centaur coders" (I respectfully disagree) and possibly all white-collar work is now concerned about the misuse of that same AI to make war? That's cute.
Literally just giving business away. This is not a cynical take, this is a realistic one.
This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".
They will simply go to another vendor... Anthropic is not THAT far ahead.
Also, the US’s enemies are not similarly restricted. /eyeroll
Palmer Luckey ("peace through superior firepower") is the smart one here. Dario Amodei ("peace through a unilateral agreement with no one, restricting oneself by presuming business partners guilty until proven innocent") is not.
Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<
And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…
… since it all goes through their servers.
Honestly, I'm glad that they're principled. The problem is that 1) most people generally are, so assuming otherwise is off-putting; and 2) some people never will be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.