> Make them pay money for false positives instead of direct support and counselling.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't think a single false positive outweighs all of the benefits; it depends on the rates of false and true positives and the (subjective) value of each (both high in this case, though I'd say preventing one shooting is unequivocally worth more than preventing one innocent person from being unnecessarily held at gunpoint).
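To make the kind of comparison I mean concrete, here's a rough sketch with entirely made-up numbers; the real argument hinges on what the actual rates and (subjective) weights turn out to be:

```python
# Rough sketch of the expected-value comparison. All numbers are invented
# for illustration; only the structure of the argument matters here.

# Assumed per-year event probabilities for one school (hypothetical)
p_true_positive = 0.02    # chance the system catches a real gun incident
p_false_positive = 0.30   # chance it flags an innocent person

# Assumed (subjective) weights, in arbitrary "harm avoided" units
value_shooting_prevented = 1000   # preventing 1 shooting
cost_false_alarm = 50             # 1 innocent person held at gunpoint

expected_net_value = (p_true_positive * value_shooting_prevented
                      - p_false_positive * cost_false_alarm)

print(f"Expected net value per year: {expected_net_value:+.1f}")
# +5.0 with these numbers -- flip the rates or the weights and the sign
# flips too, which is why "a single false positive" isn't a verdict on its own.
```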
I think I’ve said this too many times already, but the core problem here, and with the “AI craze” in general, is that nobody really wants to solve problems; what they want is a marketable product. AI seems to be the magic wrench that fits all the nuts, and since most people don’t really know how it works or what its limitations are, they happily buy the “magic dust”.
Teenagers carrying and using guns in Baltimore actually is a real problem.
Sure, but this school is in the county, outside city limits. In my experience, what passes for "sketchy" in Essex MD is roughly "random dude selling pit beef out of a barrel in front of his house", i.e. fairly benign. But it's admittedly been a long while since I lived in Baltimore.
> nobody really wants to solve problems, what they want is a marketable product
I agree, but this isn't specific to AI -- it's what makes capitalism work to the extent that it does. Nobody wants to clean a toilet or harvest wheat or do most things that society needs someone to do, they want to get paid.
Absolutely, but I don’t believe the responsibility falls on those looking to make a profit, but rather on those in charge of regulating how those profits can be made. After all, thieves want to make a profit too, and we don’t allow them to, at least not unless it’s a couple of million.
In the US, cops kill more people than terrorists do. As long as your quantification of values takes that into account.
I get that people are uncomfortable with explicit quantification of stuff like this, but removing the explicitness doesn't remove the quantification, it just makes it implicit. If, say, we allow people to drive cars even though car accidents kill n people each year, then we are implicitly quantifying that the value of the extra productivity society gets by being able to get places quickly in a car is worth the deaths of those people.
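Spelling out that implicit quantification with back-of-the-envelope numbers (the figures below are ballpark, not precise):

```python
# If society accepts roughly n road deaths per year in exchange for the
# productivity cars enable, it is implicitly valuing that productivity at
# >= n * (value of a life). Figures are order-of-magnitude US estimates.

road_deaths_per_year = 40_000           # roughly the annual US figure
value_of_statistical_life = 10_000_000  # ~$10M, the ballpark US agencies use

implied_minimum_benefit = road_deaths_per_year * value_of_statistical_life
print(f"Implied annual benefit of driving: >= ${implied_minimum_benefit:,}")
# >= $400,000,000,000 -- nobody writes this number down, but allowing cars
# asserts it all the same.
```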
In your example, if terrorists were the only people killing people in the US, and police (a) were the only means of stopping them, and (b) did not benefit society in any other way, the equation would be simple: get rid of the police. There wouldn't need to be any value judgements, because everything cancels out. But in practice it's not that easy, since the vast majority of killings in the US occur at the hands of people who are neither police nor terrorists, and police play a role in reducing those killings too.