> more stringent safeguards than previous agreements, including Anthropic's.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement, as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not sticking their necks out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement that the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
Yep. It's the difference between "Don't do these things, regardless of what the law says." and "Do whatever you want, but please follow your own laws while you do it".
As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"
Sam Altman is basically the last person anyone should listen to.
"You could parachute [Sam Altman] into an island full of cannibals and come back in 5 years and he'd be the king."
--Paul Graham, 2008
Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."
It’s a non-clause, written to sound like they are doing something to prevent these uses when they aren’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus, the administration itself gets to decide whether a use is legal.
> “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things.
That's not quite right.
First off, I don't expect that "you used my service to commit a crime" is, in and of itself, grounds to terminate a contract, so having the contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.
Second, I don't want the contract to say "if you're convicted of committing a crime using my service"; I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to have to wait for criminal prosecutors to act before I have standing. Second, because I want to have to meet only the balance-of-probabilities ("preponderance of the evidence", if you're American) standard in civil court, rather than needing a conviction secured under the "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.
I don’t think the language does, or is intended to, give OpenAI any special standing in the courts.
They literally asked the DoD to continue as is.
There is no safety-enforcement standing created, because there is no safety enforcement intended.
It is transparently written, as a completely reactive response to Anthropic’s stand, in an attempt to create a perception that they care, and to reduce the perceived contrast with Anthropic.
If they had any interest in safety or ethics, Anthropic’s stand just made acting on it far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and the public as a whole.
They could collaborate with Anthropic on a common expectation, if they have a different take on safety.
The positive safety-culture impact of such a collaboration by two competing leaders in the industry would be felt globally, going far beyond any current contracts.
But, no. Nothing.
Except the legalese and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or creating any civil legal leverage for safe use.
> except for all of the laws that allow you to do these things.
It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they say it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and stretched it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.
Let me clear it up:
The Trump administration is cartoonish and fickle. They can easily punish one group, and then agree to work with another group on the same terms, to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly what they have done with tariffs, for example.
Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal; the rest is just lawyering to the public. Internally at the DoD, I'm sure it was really about the phrase "all lawful uses". So the lawyers were able to agree to it, and the public gets this mumbo-jumbo.
I thought mass surveillance of Americans was unlawful for the DoD, CIA, and NSA? We have the FBI for that, right? :)
Sure, but OpenAI is also being disingenuous here, pretending they’re operating under the same principles Anthropic is. They’re not, and the things OpenAI is comfortable doing, Anthropic has said they’re not.
Brings to mind the infamous line from Nixon:
"When the president does it, that means it is not illegal".
This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent that many of the people pushing the excesses going on in government today share a shameless adherence to this creed.
If only Nixon had had the current Supreme Court, which actually agrees with him.
Nixon's issue wasn't a lack of support in the courts but in Congress[1]:
> On August 7, Nixon met in the Oval Office with Republican congressional leaders "to discuss the impeachment picture," and was told that his support in Congress had all but disappeared. They painted a gloomy picture for the president: he would face certain impeachment when the articles came up for vote in the full House, and in the Senate, there were not only enough votes to convict him, but no more than 15 or so senators were willing to vote for acquittal. That night, knowing his presidency was effectively over, Nixon finalized his decision to resign.
The contrast with how compliant the majorities in Congress are today to the whims of the White House cannot be overstated. The past decade has pretty much completely eliminated any semblance of a Republican Party that stood for anything other than the whims of Trump. Everyone either got on board or was exiled from power; the third highest member of House leadership got driven from Congress for taking a stand on the events of January 6, whereas the senator who in a debate in 2016 alleged that Trump's small hands implied a similar proportion for one of his less-visible body parts faded into the background for the next eight years and was rewarded with a prominent position in the cabinet this time around.
[1] https://en.wikipedia.org/wiki/Presidency_of_Richard_Nixon#Re...
Each of those clauses has a DoD policy carve-out as an exception, which basically says they can do whatever they want if they want to do it, but won’t be able to if they don’t want to do it.
This is the same government that was caught spying on its citizens by Snowden, so I don’t trust them at all.
This implies that OpenAI must build, release, and maintain a model without any safeguards, which is probably the big win, and maybe something Anthropic never wants to do.
I don't think that is the correct conclusion.
But they won't be releasing it; they will be leasing it to the DOJ, and all their other customers will get the safeguarded model.
Sorry, I meant to write DoW / Department of Defense, not DOJ.
So you want OpenAI to create “laws”?
I, for one, do not want AI labs to designate what is legally OK to do.
I much prefer the demos to take care of that.
Who said anything about OpenAI passing laws? (Where did that come from?)
Civilians are allowed to put conditions on working for, or supplying, the DoD or any governmental customer.
Tremendous good comes from those who are not willing to facilitate harms simply because they are legal.
Equating legal with ethical or safe makes no sense. [0]
[0] All of human history.
No, I want the government to prove that they understand the limitations of software that they're purchasing to use to kill people, and I want AI companies to be clear about what they think those limitations are. Just because it would be legal to use software for something doesn't mean it's capable of doing it safely, and there's absolutely no reason that the government is in a better position to judge that than the people who make it, who literally have a financial incentive to lie about it. If they're that confident it wouldn't be safe to use it for that, maybe it's a good idea to consider that.
So we should just do whatever unethical thing we want, as long as it isn't strictly illegal / we won't get caught? Actually, that does seem to be the Silicon Valley mantra. I don't know how people think this is going to play out well, particularly when the entities in question are often powerful enough to change the laws themselves.
I have never used AI to generate an answer for HN but just this once I thought it would be good to hit ChatGPT specifically and ask it for 'a list of times Sam Altman has gone against his word.' Here was its response:
Shift from Nonprofit Mission to For-Profit Orientation – OpenAI was founded as a nonprofit with a charter focused on “benefit to humanity,” but under Altman it created a capped-profit subsidiary, accepted large investments (e.g., from Microsoft), and critics (including Elon Musk in a 2024 lawsuit) argue this departed from that original mission. A federal judge allowed Musk’s claim that Altman and OpenAI broke promises about nonprofit governance to proceed to trial.
Nonprofit Control Reorganization Drama (2023) – In November 2023, the original nonprofit board cited a lack of transparency and confidence in Altman’s candor as a reason for firing him. He was reinstated days later after investor and employee pressure, highlighting internal conflict over governance and communication.
Dust-Up Over Military Usage Policies – OpenAI initially had explicit public policies restricting AI use in “military and warfare” contexts, but those clauses were reportedly removed quietly in 2024, allowing the company to pursue Department of Defense contracts — a turnaround from earlier language that appeared to preclude such use.
Statements on Pentagon Deal vs. Prior Positioning – In early 2026, Altman publicly said OpenAI shared safety “red lines” (e.g., prohibiting mass surveillance and autonomous weapons) similar to some competitors, but hours later OpenAI signed a deal to deploy its models on classified military networks, leading critics to argue this contradicts earlier positioning on limits for military use.
Regulation Stance Shifts in Congressional Testimony – Altman has advocated for strong regulation of AI in some public settings but in later congressional hearings opposed specific regulatory requirements (like mandatory pre-deployment vetting), aligning more with industry concerns about overregulation — a shift in tone compared with earlier support of regulatory frameworks.
I found this interesting. But the best approach is to start with the LLM, then check every point yourself, and summarize with real links. The moment we are OK with LLM output just once, it won't be just once, and things get too murky.
The purpose of the exercise was to see what OpenAI thinks of itself to a large degree. I hope nobody takes the answers at face value considering they clearly have a conflict of interest at their very core. It has turned into an interesting social experiment though. There is a very real instant negative reaction to saying 'an LLM generated this' no matter the context or intent.
And the powerful win even more.
That seems exactly what it should be. The United States military should be able to do what the law allows. If we don't think they should be allowed to do something, we should pass laws. Not rely on the goodness of Sam Altman.
So don’t stand up for ethics and safety where there isn’t a law for it? Backwards day?
Nobody is prosecuting the DoD with non-laws here. But one company is using their legal right to refuse to facilitate great harms.
> Not rely on the goodness of Sam Altman.
(Who said anything about that? Where did that come from?)
Nobody wants to rely on Altman!
For anything. But it would be better if he would stand up for safety, instead of undermining it.
Your logic is backwards.
If we don’t want to rely entirely on a centralized government, one increasingly interested in giving its leaders unfettered power, with all three branches increasingly willing to bend our laws and grant it impunity, then a widespread civilian culture of upholding safety, by many and all actors, is a necessity.
The latter is always a necessity. But the risks of power consolidation, with the help of AI, are rising.