I admire Anthropic for sticking to their principles, even if it affects the bottom line. That’s the kind of company you want to work for.
It's also a very clear differentiator for them relative to Google, Facebook, and OpenAI, all of whom are clearly willing, to varying degrees, to sell themselves out for evil purposes.
It will also cost OpenAI dearly if they don't communicate clearly, because I for one will push internally to switch from OpenAI (we're actually on Azure) to Anthropic. The same goes for my private account.
You can deploy Opus and Sonnet on Azure.
This will not cost OpenAI anything.
Thanks for being the voice of cynical inaction.
Is making effective weapons evil?
Given the history of US military adventurism and that we’re about to start another completely unjustified war of aggression against Iran, yes. Absolutely yes.
If it wasn't for US military power, Russia would have already overrun Ukraine. And if Iranian nuclear program is destroyed and the regime falls, it would be a good thing. For context, I'm from Czechia.
I'm from the US and strongly disagree that either of those things are a benefit to me as a US citizen. All it's doing is taking my money and putting me more at risk, and in the case of the attack on Iran: making me complicit in the most immoral acts imaginable.
As a US citizen you benefit from the status quo and global peace being maintained.
Whether it's justified or not depends on what you're trying to achieve. If your goal is to deny nukes from Iran, then the war is entirely justified.
The same admin that tore up the agreement for this we already had with Iran?
Not the same admin (that was Trump as the 45th), but I don't see the argument you're making.
A weapon is a tool.
Whether they are good or evil depends on the hands that hold it.
In good hands, weapons provide defense, deterrence, and protection.
In bad hands, weapons hurt the innocent, instill fear, and oppress.
The hands that wield them make all the difference.
What about all the weapons forbidden by the Geneva convention?
> What about all the weapons forbidden by the Geneva convention?
Some weapons are prohibited under the Geneva Convention because they are designed to cause suffering or indiscriminately kill non-combatants:
"Weapons prohibited under the Geneva Convention and associated international humanitarian law (including the 1925 Protocol, CCW, and specific treaties) include chemical/biological agents (mustard gas, sarin), blinding lasers, expanding bullets, and non-detectable fragments. Also banned are anti-personnel landmines and cluster munitions.
Key prohibited and restricted weapons include:
Chemical and Biological Weapons: The 1925 Geneva Protocol and subsequent conventions (1972, 1993) banned the use, development, and stockpiling of asphyxiating, poisonous, or other gases, including nerve agents and biological weapons.
Blinding Laser Weapons: Specifically designed to cause permanent blindness (Protocol IV of the CCW).
Non-detectable Fragments: Weapons designed to injure by fragments not detectable in the human body by X-rays (Protocol I of the CCW).
Incendiary Weapons: Restrictions on using fire-based weapons (like flamethrowers) against civilian populations (Protocol III of the CCW).
Anti-personnel Landmines: Banned under the Ottawa Treaty (1997) due to risks to civilians.
Cluster Munitions: Prohibited due to their indiscriminate nature.
These treaties aim to protect civilians and combatants from unnecessary suffering and long-term danger."
Would "good hands" choose weapons that are designed to cause suffering or that kill indiscriminately?
No, they would not.
That’s a simplistic framing (obviously)
What does effective weapons mean in this particular instance?
Depends what the customers of anthropic and OpenAI think.
Yeah
"You need me on that wall!"
This guy sounds like he ordered a code red.
Yes?
Companies change (remember "don't be evil"?) but yeah for the Anthropic of today, respect.
The team that handles their PR has done an amazing job in the last 9 months
Hint: It's much easier to have good PR by being actually good. Though it does make people like this do the whole implication thing.
I saw this the other day:
> Costco is a really popular subject for business-success case studies but I feel like business guys kinda lose interest when the upshot of the study is like "just operate with scrupulous integrity in all facets and levels of your business for four decades" and not some easy-to-fix gimmick
https://bsky.app/profile/mtsw.bsky.social/post/3lnbrfrvmss26
I don't know, staff at my two Costcos feel much more uninterested and rude than I remember a decade ago. It used to feel fun but now it's miserable.
At peak times they run out of carts and tell customers to go hunting in the lot for them; door greeters shout at members across the floor; checkout queues stretch the length of the warehouse; they start half-blocking the gas station entrance 30 minutes before close so trucks can't get in. So maybe they're turning those profit screws.
>It used to feel fun but now it's miserable.
It's not their job to entertain you.
'Delight the customer' is a basic tenet of business. A business that wants repeat customers, that is.
Ah, right, by being actually good, as in - being okay with mass surveillance as long as it isn't being done in the US, being okay with Claude assisting in killing people as long as it isn't fully autonomous, and being actively hostile to open-weight LLMs and open research on LLMs? This kind of "good"?
No, OP is right, their PR department is doing a great job.
Correct. Protect our citizens' rights, as we are the ones under the jurisdiction of our government. Yes, design competitive weapons systems that can stand up to the threats that adversary powers are creating, but do so while maintaining human control.
That kind of good.
It’s nice that Americans are being so open about how they feel about other countries these days.
"these days"? Too many countries/HNers are only just figuring out it's not fun being at the sharp-end of imperialism.
What part are you bothered about? The concept of nations?
Sibling comment summed it up pretty well; my country is considered an ally of yours, but even left leaning Americans seem to take it for granted that we deserve mass AI surveillance/blackmail/manipulation if there’s a chance it could benefit us citizens in the short term. I suppose we deserve it for being complicit in American crimes for so long
You're assuming things I didn't state. I don't particularly want mass AI surveillance at all, but considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus.
> You're assuming things I didn't state. I don't particularly want mass AI surveillance at all
That's fair, sorry for that.
> considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus
The US government is actively trying to influence politics in my country and spending huge amounts of money to do it. The US government is a much larger threat to us than our own government.
All of our tech is owned and operated by US companies, which means the US government has read/write access to all of our data. If we attempt to incentivize domestic software production (e.g. by taxing imported software, or by stipulating where our data can be stored and who can access it), the US government will destroy our economy. This has played out a few times recently.
I can't believe we were so foolish as to let this situation grow. Its going to be a painful few decades.
How have they been hostile to open weight models and research? Just because they don't release models themselves?
Note that they are still releasing interesting research
Why? What has their PR department done? Most people are quite critical of a lot of their messaging, it's their actions that seem worth encouraging
[flagged]
It's funny, because even if they walk it back, they still would come out ahead in PR versus if they just rolled over. Because at that point, it would look like a hostage victim reading a statement that they are being treated well by their captors in front of a camera.
The admin is clearly running out of steam yet you expect them to be able to get what they want next week after failing this week?
Ive been hearing this since 2016. Any day now.
Do you think that bad things happening is just hilarious in general? Do you like to see good behavior punished? I'm really trying to understand what you get out of making this comment. Also what happens when ... This doesn't happen? You just polluted the epistemic commons a bit more with some cynical bullshit sans consequence? Enough. I think it's time to start calling this garbage out when I see it.
Two things can be true at the same time. It can notionally be a “good” decision and also a straightforward act of Anthropic continuing their PR that they’re some sort of benevolent entity despite continuing to pursue a typical corporate capitalistic structure. It is what it is. The game is the game. But I’m not going to sit there and pretend their virtues are as pure as snow. I’m sorry that’s upset you.
I'm signing up for their $200/year plan to reward them for standing up to this regime.
This whole saga is extremely depressing and dystopic.
Anthropic is holding firm on incredibly weak red lines: no mass surveillance of Americans, but fine for everyone else; and automated war machines are acceptable, just not fully unmanned ones until they can guarantee a certain quality.
This should be a laughably spineless position. But under this administration it is taken as an affront to the president and results in the government lashing out.
We live in a timeline where you don’t have to have strong morals to be crushed. If you have any morals, you will be crushed.
They have earned my business, for now.
If you're a billionaire there's no risk to "sticking to principles", so there's nothing to admire. Also that's not what they're doing. These are calculated moves in a negotiation and the trump regime only has 3 years left. Even a CEO can think 4 years ahead.
It's probably in Anthropic's interest to throw grok to these clowns and watch them fail to build anything with it for 3 years.
I disagree. 3 years is an insanely long time in the AI space. The entire industry pretty much didn't even exist three years ago! Or at least not within 4 orders of magnitude.
Also, every other company has bent the knee and kissed the ring, and the Trump admin will absolutely do everything it can to not appear weak, and to harm Anthropic. If it were so easy to act principled, don't you think other companies would've refused too? E.g. Apple.
And there is real harm here. You're reading about it - they get labeled a supply chain risk. This is negative and very tangible
Considering how many bootlicking billionaires I see these days, it is still a bit surprising.
[flagged]
There is already genai.mil: https://www.war.gov/News/Releases/Release/Article/4354916/
Why does it need to be a completely different, separately trained model? AWS doesn't provide unique technologies in their government cloud beyond isolation and firewalled access; Anthropic can do the same thing. They'd probably just need to cough up enough to register a new domain name!
I can think of two reasons. One, to have plausible deniability behind the necessary future statement "Claude is not used by the DoD/DoW to conduct domestic mass surveillance or autonomous killing"; by having the model be properly different from the one used by the public, they can wrangle over the language with technicalities and still avoid outright lying. (With their IPO in sight, let's keep in mind that everything is securities fraud.)
And two, I suspect that some of the guardrails have been "baked in" to Anthropic's model. Much in the same way as the Chinese open-weight models have a strong bias against expressing positive sentiments about Tiananmen Square, Tank Man or Winnie the Pooh, the "Standard Claude" would likely have the fundamental product biases trained into it.
Taken together it would therefore be both politically and financially sensible for Anthropic to create a separate, unrestricted[tm] almost-Claude for the morally unconstrained military / intelligence purposes.
Exactly.
> 83 people in total killed in US attack to abduct President Nicolas Maduro
Blood is on their hands already
So much left unsaid. So much implied. Let’s make it explicit and talk about it. Here are some follow-up questions that reasonable people will ask:
What was Anthropic’s role in the Maduro operation? (Or we can call it state-sponsored kidnapping.) Who knew what and when? Did Anthropic find itself in a position where it contradicted its core principles?
More broadly, how does moral culpability work in complex situations like this?
How much moral culpability gets attributed to a helicopter manufacturer used in the Maduro operation? (Assuming one was; you can see my meaning I hope.)
P.S. Traditional programming is easy in comparison to morality.