"the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target."
This is the first article I have seen that mentions Claude in relation to this specific incident. There has been plenty of talk about AI use in warfare in general, but in the case of this school, most of the coverage I have seen pointed to outdated information and procedures not being properly followed.
It's definitely been reported before that Claude was used for Iran attacks, at the beginning of March or earlier:
https://www.theguardian.com/technology/2026/mar/01/claude-an...
Edit: Also, https://www.washingtonpost.com/technology/2026/03/04/anthrop...
"The U.S. used Anthropic's Claude to support Operation Epic Fury against Iran yesterday, sources familiar with the Pentagon's operations tell Axios."
OK. The US probably also used telephones and Diet Coke.
Nothing cited said that Claude was selecting targets or informing target selection.
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
I have heard the claim everywhere.
there is a lot of confusion about all this stuff
You can use Claude in Amazon Bedrock today, and the way that works, if you want it to be this way, is that the code, the model weights, and whatever other artifacts are involved all run on Bedrock. Bedrock is not a facade over Claude's token-billed RESTful API, where Anthropic runs its own infrastructure. In the strictest sense, Bedrock can be used as a facade over lower-level Amazon services that obey non-engineering, real-world constraints: geographic and physical boundaries (which physical data center hardware is connected to what, and where) and jurisdictional boundaries.

It's multi-tenancy in the sense that Amazon has multiple customers, but not in the sense that your workload touches Anthropic's infrastructure. Because customers are willing to pay for these requirements, Amazon has worked out how to run the Claude model weights as though they were an open-weights model you downloaded off Hugging Face, without ever giving you the weights, while still satisfying all the IP, jurisdictional, and other non-technical requirements you are paying for, in an arrangement Anthropic has also agreed to.
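To make the distinction concrete, here is a rough sketch of what a Bedrock-style invocation payload for Claude looks like (the model ID and version string are taken from AWS's public docs but treat them as assumptions; the point is that this JSON goes to AWS's InvokeModel endpoint in a region you choose, not to Anthropic's API):

```python
import json

def build_bedrock_request(prompt, max_tokens=256):
    # The body mirrors Anthropic's Messages schema, but Bedrock routes it
    # to model weights hosted on AWS hardware in a specific region, which
    # is what lets customers satisfy jurisdictional/physical constraints.
    return {
        # Assumed model ID; actual IDs vary by model and region.
        "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",  # Bedrock-specific field
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_bedrock_request("hello")
```

You would hand this to the `bedrock-runtime` InvokeModel API rather than POSTing to api.anthropic.com; the schema is nearly identical, which is part of why people conflate the two deployment models.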
This is what the dispute with the Pentagon is about, and what people mean when they say Claude is used in government (it is also used in Elsa at the FDA, for example). In this arrangement Anthropic has no telemetry, such as the prompts. It has a contract that says what you can and cannot use the model for, but it cannot prove how you actually use it, which of course it can when you use its RESTful API service. It also can't "just" paraphrase your user data and train on it, like it does on the RESTful API service. There are reasons people want this arrangement ($$$).
The vendor (Palantir) can use whatever model it wants, right? It chose Claude via "Bedrock." I don't know whether they actually use Claude via Bedrock; ask them. But that's what they are essentially saying, and that's what this is about. Palantir could just as well run Qwen3 on its own datacenter hardware. Do you understand? It matters, but it also doesn't matter.
In my opinion it's a bunch of red herrings, and that kind of red herring is mostly what the article is about.