What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost-effective systems for that purpose?
> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?
Yes. Absolutely.
And what? Get nationalized? Get labelled as terrorists?
The US system doesn't empower a company to say no. It should though.
Yes. Force them to do it the hard way and fight through it. Don’t abdicate in advance
You, I, or a company don't need the system's empowerment to say "no", though. Just say it. I would certainly choose being called a "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.
You own nothing but your opinion. (No offense to personal property aficionados)
I don't understand this. For example, what would you have done if you were Ukrainian right now? (Before 2014, arguably the start of the conflict, and after the invasion.)
That is an interesting question, very far from my daily concerns, and it brings up dilemmas when I think about it. My response would probably be "I don't know".
However, Anthropic's situation is very different: there's no ongoing invasion of the USA, and the USA traditionally attacks other countries once in a while (no judgment), so the weapons upgrade will be "useful" in the field.
It is of course possible to argue that the reason there is no ongoing invasion of the USA is because of our continued investment in technology for killing people
That's the same type of thinking conspiracy theorists have, the type you can never disprove.
I am 100% against militarism and wish we didn't need any of this, but the power balance between Russia and Ukraine, or even Israel and the Palestinians, seems to corroborate the thesis... There would likely be no Ukraine war today if Ukraine hadn't voluntarily given up its nukes three decades ago (unproven thesis). There was one because Russia thought it could win. The ongoing (even after the "ceasefire") Israeli occupation and attacks on the remnants of Palestinian territory show the same. If you are the weaker party and there is a stronger party that wants what you have (or plainly wants to eradicate you), then they'll do so.
> I don't understand this. For example, what would you have done if you were Ukrainian right now? (Before 2014, arguably the start of the conflict, and after the invasion.)
There are a lot of well-meaning people who are very anti-weapon or anti-violence under any circumstances. The problem is that when those people actually need those weapons and that violence, they are so inadequate at it that they become a liability to themselves and others.
I'm not saying I have or know of a solution, but I remember the old saying (paraphrasing) that it's better to be a warrior working a farm than a farmer working a war.
Sure, if that's what it takes to do the right thing.
Literally Rule 1 On Fighting Tyranny:
> 1. Do not obey in advance.
> Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.
https://scholars.org/contribution/twenty-lessons-fighting-ty...
Yes, that's exactly what I want them to say.
No, you don't. If they develop the safest, most cost-effective version of the technology that the military WILL inevitably use from some company, Anthropic or otherwise, then that's the version of this tech you want them using.
The safest, most cost-effective version will not help you when you are their designated target for disagreeing with the regime.
After all, the regime already says such domestic dissenters are terrorists, and has, on multiple recent occasions, justified the execution of domestic dissenters on that basis.
The safest version will still be better overall regardless, by definition. It is also a better future for most if it is inevitable that the war department is going to use a less safe alternative if they can't use the safer one.
The safest version will be the one most effective at killing dissenters without killing regime personnel. So yes, it will be better, for the people controlling the killbots, not for their victims.
>Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?
Yes. Yes, that's precisely what we want.
I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.
There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter 20th century. (Note that WWI by itself wasn't sufficient to prevent WWII!)
You can take issue with that argument if you want but it’s unconvincing not to address it.
There's also an extremely straightforward argument that if the current crop of authoritarian, dictatorial players in power now had been in power then, the outcome of the latter 20th century would have been much different.
If my grandma had wheels she'd be a bicycle
The guy who authorized the Manhattan project:
- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment
- threatened court-packing until SCOTUS backed down and started rubber-stamping his agenda
- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini
- interned 120k people without due process, on the basis of ethnicity
- turned a national party into a personal patronage system
- threatened to override the legislature if it didn’t start passing laws he liked
I'm not saying any of this is good or bad; clearly, in the official history, it was retroactively justified by victory in WWII. But it's a bit rich to say that the bomb wasn't developed under authoritarian conditions.
It is a huge stretch to label a popular, democratically elected and reelected President and Congress "authoritarian".
Great, now go ahead and prove that AI also reaches strategic equilibrium. This was pretty much self-evident with nuclear weapons so should probably be self-evident for AI too, if it were true.
That's a little bit like saying the bullet in the gun prevented someone getting shot while playing Russian Roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason.
I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner?
I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.
> Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
China considers all lethal autonomous weapons "unacceptable", calling on all countries to ban them. Countries like the US and India refuse to back such proposals. See China's official stance on this matter below.
https://documents.unoda.org/wp-content/uploads/2022/07/Worki...
I totally understand that you got brainwashed by the media, but hey, you apparently have internet access, so why can't you do a little bit of research of your own before posting nonsense using imagination as your source of information?
China does not consider all lethal autonomous weapons system "unacceptable" even for use, let alone to develop, and the document you linked explains this very clearly. Here's what the document actually says, formatted slightly for clarity:
```
Basic characteristics of Unacceptable Autonomous Weapons Systems should include but not limited to the following:
- Firstly, lethality, meaning sufficient lethal payload (charge) and means.
- Secondly, autonomy, meaning absence of human intervention and control during the entire process of executing a task.
- Thirdly, impossibility for termination, meaning that once started, there is no way to terminate the operation.
- Fourthly, indiscriminate killing, meaning that the device will execute the mission of killing and maiming regardless of conditions, scenarios and targets.
- Fifthly, evolution, meaning that through interaction with the environment, the device can learn autonomously, expand its functions and capabilities in a degree exceeding human expectations.

Autonomous weapons systems with all of the five characteristics clearly have anti-human characteristics and significant humanitarian risks, and the international community could consider following the example of the Protocol on Blinding Laser Weapons and work to reach a legal instrument to prohibit such weapons systems.
```
Charitably, you might say that China is worried about a nightmare scenario. Less charitably, you might say that the definition of an unacceptable weapon system is so tight that it does not describe anything that anyone would ever build, or would want to build. This posture would allow China to adopt the international posture of seeming to oppose autonomous weapons without actually de facto constraining themselves at all.
This, by contrast, is what China considers acceptable:
```
Acceptable Autonomous Weapons Systems could have a high degree of autonomy, but are always under human control. It means they can be used in a secure, credible, reliable and manageable manner, can be suspended by human beings at any time and comply with basic principles of international humanitarian law in military operations, such as distinction, proportionality and precaution.
```
So as long as the system has a killswitch (something that afaik absolutely no one is proposing to dispense with?), it's Acceptable.
Meanwhile, it would certainly seem that China's defense research universities are interested in developing this tech: https://thediplomat.com/2026/02/machines-in-the-alleyways-ch....
So, I did a bit of research with my internet access-- how do my findings square with your impressions?
I don't know where you got this info about China, but it's wrong. AI in a warfare context is the new Manhattan Project.
All the world powers are in a race to it.
https://cset.georgetown.edu/article/china-trains-ai-controll...
https://thediplomat.com/2026/02/machines-in-the-alleyways-ch...
https://www.brookings.edu/articles/ai-weapons-in-chinas-mili...
https://cset.georgetown.edu/article/how-china-is-using-ai-fo...
So would you have preferred the Nazis to develop the most powerful weapons and win the world war (which is what they were trying to do)?
No, that's precisely why I'm opposed to it happening here, and why I prefer the idea of Anthropic limiting their contribution to creating such a scenario.
If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?
If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
> If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?
No
> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.
Anthropic was already giving them that. It’s not like they need domestic mass surveillance or autonomous kill bots to have a portfolio of possible winners. If the goal is to keep the US competitive in AI, this whole process was actively unhelpful. Honestly more helpful for our adversaries than for us.
Why are you assuming that people in China, Iran, Russia, etc. are not having these exact same conversations, and that perhaps a powerful example from the USA, along with some belief that the USA will not be able to easily get this technology, might help inspire them to abstain as well?
However horrific the regimes in these countries are, the people behind the technology there are just as likely to be intelligent and moral human beings as the people working on these systems in the USA and Europe.
With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. Reasonable assumption to have oriented around at the time though.
It's not just the atomic bomb I'm talking about. The USA had the best production of fighter jets, bombers, all kinds of communication technology, deciphering technology, all the ammunition. All of those together beat the Nazis, and the Nazis were trying their best to develop better, more advanced technologies than the USA!
Did WMDs have a meaningful effect on stopping the Nazis? I thought the bomb wasn't dropped until after they surrendered.
The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations.
And they inflicted less damage than the firebombing campaigns against civilian population centers that were carried out alongside the A-bombs.
The A-bombs were not the worst part of the attack on Japan, and thus were not "needed to end the war". They were part of marketing /the/ superpower.
"Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies.
Was it the best path to end the war? Certainly.
The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.
Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?
> I absolutely don’t want tech companies to use the money I pay them to harm people.
Just one example of many, but the companies that make the CPUs you and all of us use every day also supply militaries.
I am unaware of any tech company that directly does physical warfare on the battlefield against humans.
Another example: the companies that provide drinking water also supply militaries. But there might be a difference between supplying drinking water and making AI killing machines.
> making AI killing machines
What’s an example of a company that’s making killing machines that a typical consumer or someone HN might be buying product or services from?
The easy answer is Westinghouse (look for the youtube short about "things that spin"...)
As far as I know, Apple does not supply their chips for military use.
Time to stop paying your taxes. :P
Because it's painfully short-sighted, or maliciously ignorant.
No, it’s just that I don’t want the money I spend to have blood on it. Trivially simple.
Also trivially naive and useless. Evil exists. Conflicts will happen. If evil was at your doorstep, threatening people you love, you absolutely DO want money you spend to have blood on it, if it means keeping yourself and your loved ones safe. Trivially simple.
This line of thinking is entirely foreign (and vaguely repulsive) to me. Can I imagine a situation where I'm forced to cause the death of someone in order to defend those close to me? Vaguely. But I would be racked with guilt for the rest of my life.
In any case, AI drones will largely be used for "defense" in the euphemistic sense.
What if I told you that it's way too late for that?
Well, we have to try to live as virtuously as we can using the means and remedies available to us.