It’s unsurprising, since this kind of classification is only as good as the training data.
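To make that concrete, here's a toy sketch (Python with scikit-learn, entirely synthetic data, not any real system) of how a model trained on biased historical labels just reproduces the bias:

```python
# Toy illustration: a classifier trained on biased labels learns the bias.
# All data here is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two features: actual behavior, and a proxy (e.g., neighborhood).
behavior = rng.normal(size=n)
neighborhood = rng.integers(0, 2, size=n)  # group 0 or group 1

# Historical labels reflect over-policing of group 1, not behavior alone:
# same behavior, more recorded incidents in one group.
labels = (behavior + 1.5 * neighborhood + rng.normal(size=n)) > 1.0

X = np.column_stack([behavior, neighborhood])
model = LogisticRegression().fit(X, labels)

# Identical behavior (0.0), different group -> very different "risk" scores.
for group in (0, 1):
    p = model.predict_proba([[0.0, group]])[0, 1]
    print(f"group={group}: predicted risk {p:.2f}")
```

Two people with identical behavior get wildly different scores purely because of which group the historical data over-flagged. That's the whole failure mode in a dozen lines.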

And police do this kind of stuff all the time (or at the very least you hear about it a lot if you grew up in a major city).

So if you’re gonna automate broken systems, you’re going to see a lot more of the same.

I’m not sure what the answer is, but I definitely feel that “security” systems like this, once purchased and rolled out, need to be highly regulated and coupled with extreme accountability and real consequences for false positives.