Did anyone seriously believe this was the AI's fault? The modern military use of LLMs is very clearly for the purpose of generating vaguely plausible targets while distancing any person from the decision to kill. Surely, if we cared at all about accomplishing a strategic goal, we would have had a set of well-documented targets ready to go. Instead the goal seems to be to drop as many bombs as possible, hope the computer is good enough that they mostly hit people connected to things we don't like, and loudly proclaim that killing people matters more than having any goal at all.