This is the exact attitude that led to a chatbot being used to identify a school for girls as a valid target.

The chatbot cannot be held responsible.

Whoever is using chatbots for selecting targets is incompetent and should likely face war crime charges.

"that led to a chatbot being used to identify a school for girls as a valid target"

Has it been stated authoritatively somewhere that this was an AI-driven mistake?

There are myriad ways that mistake could have been made that don't require AI. These kinds of mistakes were certainly made by all kinds of combatants in the pre-AI era.

Do you think anyone is ever going to say this under any circumstances? That Anthropic were right, and that they were proved right the very next day?

Yeah, yeah, they probably had a human in the loop, but that's not really the point.

Targeting and accuracy mistakes happen plenty in wars that aren't assisted by AI. I don't think it's fair to assume that AI had a hand in the bombing of the school without evidence.

What attitude exactly are you talking about? The one that says that if you’re going to morally sell out it would be better if you at least tried not to kill children?