"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"
Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production; it belongs in a lab, not in public buildings such as schools.
Charge the superintendent with swatting.
Decision-maker accountability is the only thing that halts bad decision-making.
> Charge the superintendent with swatting.
This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.
If a human verified the video, that human was the decision-maker. If no human verified the video, the human who gave a blank check to the AI system was the decision-maker. It's not really about the quality of journalism here.
Please provide the quote from the story that says which of those is the case.
We're talking about who should be charged with a crime. I sincerely hope we're going to do more discovery than "ask Dexerto to summarize what WBAL-TV 11 News said".
> Police later showed him the AI-captured image that triggered the alert. The crumpled Doritos bag in his pocket had been mistaken for a gun.
That quote sorta suggests that the police got the alert, looked at the photo, and were like "yeah, that could be a gun, let's go".
Still dumb.
If the police actually thought it was a gun, what would be the "smart" thing to do?
Not at all.
The superintendent approved a system that they 100% knew would hallucinate guns on students. You assert that if the superintendent required a human in the loop before calling the police, the superintendent is absolved of responsibility for deploying that system on students.
You are wrong. The superintendent is the person who decided to deploy a system that would lead to swatting kids and they knew it before they spent taxpayer dollars on that system.
The superintendent also knew that there is no way a school administrator is going to reliably NOT dial SWAT when the AI hallucinates a gun. No administrator is going to err on the side of "I did not see an actual firearm so everything is great even though the AI warned me that it exists." Human-in-the-loop is completely useless in this scenario. And the superintendent knows that too.
In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>. We are not close to safely betting lives on it, but people will do it immediately anyway.
No amount of telling people that AI hallucinates will get some people to believe that AI hallucinates.
"In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>."
Maybe, but that won't stop the kind of people that watch cable news from saying "if it stops one crime" or "if it saves one life".
So, are you implying that if humans surveil kids at random and call the SWAT team whenever a frame in a video seems to imply one kid has a gun, then it's all OK?
Those journalists, just trying to get (unjustified, dude, unjustified!!) emotes from kids being mistakenly held at gun point, boy they are terrible.... They're just covering up how necessary those mistakes are in our pursuit of teh crime...
If security sees someone carrying a gun in surveillance video, on a gun-free campus, and police verify it, then yes, that's justified by all aspects of the law. There are countless examples of surveillance of illegal activity resulting in police action.
Are you suggesting it's not?
Nobody saw a gun in a video. Nobody even saw something that looked like a gun. A chip bag, at most, is going to produce a bulge. No reasonable human is going to look at a kid with a random bulge in their pocket and assume gun. Otherwise we might as well start sending our kids to school naked; this is the kind of paranoia that brought us the McMartin Preschool nonsense.
Please provide the quote from the article, or link that lets you know, so confidently, that it wasn't passed through human review.
They didn't see that, though. They saw a kid with a bulge over their pants pocket, suggesting that something was in the pocket. The idea that any kind of algorithm can accurately predict that an amorphous pocket bulge is a gun is just bonkers stupid.
(Ok, ok, with thin, skin-tight, light-colored pants, maybe -- maybe -- it could work. But if it mistook a crumpled-up Doritos bag as a gun, clearly that was not the case here.)
I don't see any of this in the above article. Could you provide the link with that information in it?
Wouldn’t matter if they did. There’s no penalty for getting it wrong, so the human is always incentivized to say yes and then say oops if it was wrong.
If there’s no feedback mechanism, verification doesn’t matter.
Yes, clearly the journalist was the cause of the problem here. You're an idiot.
> Make them pay money
It already cost money: the time and resources that were misappropriated.
There need to be resignations, or jail time.
The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).
I wonder how much more likely it is to get a false positive from a black student.
The question is whether that Doritos-carrying kid is still alive, instead of being shot at by violent cops, based on a false positive about a gun, only because he is white. (And the cops must have figured it was likely a false positive, because the info came from AI surveillance.) These are the same cops who typically do nothing when an actual shooter is roaming a school on a killing spree: at the Uvalde school shooting, hundreds of cops in full body armor milled around the school, refusing to engage the shooter inside, and even prevented parents from going in to rescue their kids.
Before clicking on the article, I kinda assumed the student was black. I wouldn't be surprised if the AI model they're using has race-related biases. On the contrary, I would be surprised if it didn't.
I assume they were provided gift cards good for psychotherapy sessions.
> Make them pay money for false positives instead of direct support and counselling.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't think a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing one shooting is unequivocally of more value than preventing one innocent person being unnecessarily held at gunpoint).
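To make the trade-off concrete, here is a minimal sketch of the base-rate arithmetic behind "it depends on the rates of false and true positives." Every number below is invented for illustration (the article gives no figures for prevalence, sensitivity, or false-positive rate); the point is only that when real guns are rare, even a seemingly low per-scan false-positive rate means almost every alert is a false one.

```python
# Hypothetical numbers only -- none of these come from the article.
students_scanned_per_day = 2000   # scans the system performs daily
true_gun_prevalence = 1 / 100000  # P(scanned student actually carries a gun)
sensitivity = 0.99                # P(alert | gun present)
false_positive_rate = 0.001       # P(alert | no gun), i.e. 1 alert per 1000 clean scans

# Expected alerts per day, split by whether a gun was really there.
expected_true_alerts = students_scanned_per_day * true_gun_prevalence * sensitivity
expected_false_alerts = students_scanned_per_day * (1 - true_gun_prevalence) * false_positive_rate

# Precision: of all alerts, what fraction involve an actual gun?
precision = expected_true_alerts / (expected_true_alerts + expected_false_alerts)

print(f"true alerts/day:  {expected_true_alerts:.4f}")
print(f"false alerts/day: {expected_false_alerts:.2f}")
print(f"precision:        {precision:.3%}")
```

Under these assumed numbers the system produces roughly two false alerts per day against about 0.02 true ones, so under 1% of alerts are real. Any honest cost-benefit argument for deployment has to weigh the value of the rare true positive against that steady stream of false ones.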
I think I’ve said this too many times already, but the core problem here, and with the “AI craze” generally, is that nobody really wants to solve problems; what they want is a marketable product. AI seems to be the magic wrench that fits all the nuts, and since most people don’t really know how it works or what its limitations are, they happily buy the “magic dust”.
Teenagers carrying and using guns in Baltimore actually is a real problem.
Sure, but this school is in the county, outside city limits. In my experience, what passes for "sketchy" in Essex MD is roughly "random dude selling pit beef out of a barrel in front of his house", i.e. fairly benign. But it's admittedly been a long while since I lived in Baltimore.
> nobody realy wants to solve problems, what they want is a marketable product
I agree, but this isn't specific to AI -- it's what makes capitalism work to the extent that it does. Nobody wants to clean a toilet or harvest wheat or do most things that society needs someone to do, they want to get paid.
Absolutely, but I don’t believe the responsibility falls in the hands of those looking to make a profit, but rather in the hands of those in charge of regulating how those profits are made. After all, thieves want to make a profit too, but we don’t allow them to; at least not unless it’s a couple of million.
In the US, cops kill more people than terrorists do. As long as your quantified values take that into account.
I get that people are uncomfortable with explicit quantification of stuff like this, but removing the explicitness doesn't remove the quantification, it just makes it implicit. If, say, we allow people to drive cars even though car accidents kill n people each year, then we are implicitly quantifying that the value of the extra productivity society gets by being able to get places quickly in a car is worth the deaths of those people.
In your example, if terrorists were the only people killing people in the US, and police (a) were the only means of stopping them, and (b) did not benefit society in any other way, the equation would be simple: get rid of the police. There wouldn't need to be any value judgements, because everything cancels out. But in practice it's not that easy, since the vast majority of killings in the US occur at the hands of people who are neither police nor terrorists, and police play a role in reducing those killings too.