Oh yeah he's refusing out of solidarity alright :eyeroll:
https://x.com/sama/status/2027578652477821175?s=20
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety; we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Amazing / weird that this sounds like a lot of the stuff Amodei said Anthropic asked for
What this confirms is that it was never about Anthropic's terms. The administration has a bigger issue with Anthropic and that was just an excuse to ditch them. What exactly the issue was I'm not sure. Maybe OAI paid off the right people.
I don't think it does. It could just as well show OpenAI accepted terms that are unacceptable to Anthropic.
If they say "we define mass surveillance as flagging terrorist-looking people automatically with the Family Guy approach, but it's ok if a person types a name to look it up" then you can say "we agreed not to have mass surveillance by AI" or "that's still mass surveillance and we disagree with it".
Given Sam Altman's prior experience with Worldcoin, I'm inclined to think he doesn't give a dime about mass surveillance.
Additionally: why wasn't X.ai considered before OpenAI? I've also noticed a lot of hate from Musk towards Anthropic lately.
Idk who would even want them as a client. They'll change course on you every 4 years at minimum, potentially in massive ways that might force you to change your product, putting all your other users at risk. And they might just do that to you legislatively, anyway!
They have unlimited, free money.
OpenAI courts Trump and their executives donate to MAGA foundations directly.
From Axios:
What to watch: OpenAI CEO Sam Altman said in a memo to staff on Thursday night that the company will uphold the same red lines as Anthropic on surveillance and autonomous weapons, but still hopes to strike a deal with the Pentagon.
https://www.axios.com/2026/02/27/anthropic-pentagon-supply-c...
UPDATE: Sam Altman just agreed to be the DoD's replacement, but says he made them agree to no mass surveillance/autonomous weapons (the same terms that were rejected when Anthropic insisted on them? Everyone is skeptical in the replies on Twitter)
https://xcancel.com/sama/status/2027578508042723599
Reminds me of Google, Apple, Microsoft, and Facebook releasing similarly-worded statements denying that they would share information with NSA PRISM, despite the Snowden docs
Sam Altman is a serial liar and grifter. He personally donated $1M to Trump. Don't believe a word in that press release.
The same Sam Altman that brought us Worldcoin + The Orb?
You mean, stole it from Google’s internal labs.
Google is the Xerox PARC of AI, but unlike Xerox they decided to enter the ring with Gemini, which is now quite good.
(For those who don't know the history, Xerox PARC, and SRI before them, invented the modern PC GUI and a lot of other modern PC technology in the late 70s and early 80s, which Apple copied and then everyone else copied from Apple. Xerox paid a lot for the R&D but never used it, since it would have cannibalized their copier business: the classic innovator's dilemma.)
Meanwhile Google is murdering their web search business with chatbots. Rightly so, as that business needs to die, or at least materially transform.
Lol remember BARD? or no?
Your delusion should be studied. Narcissists love people like you
Are you projecting? I'm giving credit where credit is due? What are you doing?
This sucks but is unfortunately true.