> And some teen may be traumatized.
Um. That's not really the danger here.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
Did you want to emphasize or clarify the first danger I mentioned?
My read of the "Um" and the quoting was that you thought I missed that first danger, and so were disagreeing in a dismissive way.
In fact, we're largely in agreement about the first danger, but people trying to follow a flood of online dialogue might miss that.
I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely when the second danger is pointed out.
I'd argue the second danger is worse: a shooting might be incidental (and up to human judgement), but being traumatized is guaranteed and likely to be much more frequent.
I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.
If the US weren't psychotic, not all police would have to be armed, and not every police response would be an armed response.
Even if not all police were armed, the response to "AI said someone has a gun" would always be the armed police.
Why would it not be "human reviews the image that the AI said was a gun"?
The entire selling point of AI is to not have humans in the loop.
Even despite the massive protests in the past few years, we're moving further in that direction.
Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.
Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.
[0] Even though no other free society has to pay that price. But whatever.
There are far more deaths by automobile than homicides by guns.
In the US, guns and automobiles kill roughly the same number of people each year.
Guns are actually easier to control, and controlling them significantly reduces the ability to target multiple people at once. There are a lot of countries successfully controlling guns.
As for the argument that then only criminals would have guns: in India, at least, criminals have very limited access to guns. They have to resort to unreliable handmade guns, which are difficult to procure, so they usually use knives and swords instead.
> Guns are actually easier to control
This would not be the case in the US.
> The danger is that it's as clear as day that in the future someone is gonna be killed.
This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.
So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
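To make that concrete, here's a minimal sketch of the base-rate arithmetic. Every number below is made up for illustration; the real rates are exactly the data we're missing:

    # Hypothetical numbers only; not from the article.
    sensitivity = 0.99          # assumed P(alert | gun actually present)
    false_positive_rate = 0.01  # assumed P(alert | no gun present)
    prevalence = 1e-5           # assumed P(gun present) per camera event

    # Bayes' rule: what fraction of alerts are real?
    p_alert = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    p_gun_given_alert = sensitivity * prevalence / p_alert
    print(f"Share of alerts that are real: {p_gun_given_alert:.2%}")  # ~0.10%

With those assumed rates, roughly 999 out of every 1,000 alerts would be false alarms, each one a potential armed response to an innocent person. That's where the value judgment comes in: how many false alarms per true detection is society willing to accept?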
> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
Huh, I can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.
Is HN really this ready to dive into obvious logical fallacies?
My original comment, currently sitting at -4, has so far attracted one guilt-by-association plus implied-threat combo, and no other replies. To remind readers: My horrifying proposal was to measure both the risks and the benefits of things.
If anyone genuinely thinks measuring the risks and benefits of things is a bad idea, or that it is in general a good idea but not in this specific case, please come forward.
> Is HN really this ready to dive into obvious logical fallacies?
No, which is why your comment was downvoted - the following is a fallacy:
> This argument can be made about almost every technology,
That's the continuum fallacy.
No, it isn't the continuum fallacy.
I'm not claiming that a continuous range exists, and that one end cannot be distinguished from the other because the slope between those points is gradual. I'm claiming that there is a category, called technology, and everything in that category is subject to that argument.
If you want to dispute that, it's incumbent on you to provide evidence for why some technology subcategories should not be subject to that argument.
Specifically: You need to present a case for why AI devices like the one discussed in TFA should not be evaluated in terms of their risks and benefits to society.
Good luck with that argument.
Sorry for being glib; it was low-hanging fruit. My actual point should have been stated more clearly: measuring risk/benefit is really complicated, because there's almost never a direct comparison to be made when balancing profit, operational excellence, and safety.