Of course the US is going to do this, and of course it's in Anthropic's best interest to comply. Right now China is flooding HuggingFace with models that will inevitably have this capability. Right now there are hundreds of hosted models that have been deliberately processed to remove their refusals and safety training. Everyone who keeps up with this knows about it. HF knows about it. And it is pretty obvious that those open weight models will be deployed in intelligence and defense. It is certain that not just China, but many nations around the world with the capital to host a few powerful servers running the top open weight models, are going to use them for that capability.
The narrative on social media, this site included, is to portray the closed western labs as the bad guys and the less capable labs releasing their distilled open weight models to the world as the good guys.
Right now a kid can go download an Abliterated version of a capable open weight model and they can go wild with it.
But let's worry about what the US DoD is doing or what the western AI companies absolutely dominating the market are doing because that's what drives engagement and clicks.
> But let's worry about what the US DoD is doing
They want Anthropic to enable mass surveillance and autonomous attack systems with no human in the loop.
Hardly compares to a kid downloading a model to experiment with.
*To improve* mass surveillance and autonomous attack systems with no human in the loop. China and the USA already had those kinds of systems well before AI.
China is certainly lax, but the US doesn't allow autonomous attack systems. For attack systems, a human is always required to make the judgement call on when to attack.
Or at least it didn't until the current regime.
The US does have autonomous defensive systems.
I could be wrong though, can you post your evidence? The closest I could find is loitering munitions.
Even so, a company shouldn't be forced to go against its ethics if those ethics help humans.
Drone pilots don't get any info about their target, certainly not enough to make a judgement call. If they object (or burn out) someone else is put in the chair.
People are conscripted; they put on the uniform and become legitimate targets. It might as well be a robot doing the shooting. Same difference.
It's not the same.
The pilot becomes responsible for those outcomes. Indiscriminately killing civilians, for example, is a war crime. It's easier to get an AI to commit war crimes than humans.
Perhaps, but I don't know whether the difference is significant. Everything changes, yet we try to stretch rhetoric built for stabbing someone with a sword to cover hypersonic missiles? We might hold the pilot responsible if they erase a building, but I'm far less comfortable blaming them. We know the targets are actually picked by computers using metadata. The difference gets increasingly vague.
> Right now a kid can go download an Abliterated version of a capable open weight model and they can go wild with it.
Is the reason to ban or block free open weight models that you're worried what kids will do with them?
I'd imagine the economic case to be made is that Western AI companies will ultimately not be able to compete with free open weight models. Additionally, open weight models will help spread the economic gains by not letting a few monopolies capture them behind regulatory red tape.
Finally, I'd say the geopolitical argument for why open weight models are better is that if the West controls the open source software powering AI, it will be able to reap the benefits that soft power brings with it.