I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
Someday there'll be a lawyer in court telling us how strong the AI evidence was because companies are spending billions of dollars on it.
Or they'll point out that police have opened fire over a falling acorn, so AI systems shouldn't be held to a higher standard than that and are possibly even an improvement.
And there needs to be an opposing lawyer ready to tear that argument to pieces.
You mean in the same fallacious sense of "you can tell cigarettes are good because so many people buy them"?
That sort of rhetoric works very well unfortunately.
In marketing, that's called "the bandwagon effect" and is one of the more powerful techniques for influencing people's thoughts and behaviors. Sadly, we are social animals and "social proof" is far more powerful than it should be.
Well, now bots and AI get to amplify the effect.
"Everybody loves AI, 23.1 billion internet users couldn't possibly be wrong."
The article says the police later showed the student the photo that triggered the alert. He had a crumpled-up Doritos bag in his pocket. So there was no gun in the photo, just a pocket bulge that the AI thought was a gun... which sounds like a hallucination rather than any reasonable pattern-matching.
But the fact that the police showed the photo does suggest that maybe they did manually review it before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there were no AI involved, and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume that they let the fact of the AI flagging it override their own judgment to some degree.
Is use of force without justification automatically excessive force or is there a gray area?
See https://en.wikipedia.org/wiki/Graham_v._Connor