If false positives are known to happen, then you design the system so the image is vetted before the police are told the suspect is armed. The company is basically swatting people, but I'm sure they'll never be held liable.
Actually, if a system produces too many false positives or false negatives, it's basically useless. The operators will eventually come to doubt it and the whole thing will implode, which is the best possible outcome.
We already went through this years ago with all those terrorism databases, and we (humanity) have learned nothing. Any database will contain a percentage of erroneous data; it is impossible to eliminate errors completely. Therefore, any database used to identify <fill in the blank> will produce erroneous conclusions. It has been observed over and over again, and yet governments can't help telling themselves that "this time it will be different because <fill in the blank>", e.g. AI.
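To put rough numbers on why even a small error rate wrecks a system like this, here's a minimal base-rate sketch. All the figures below are made-up assumptions for illustration, not data from any real watchlist or detector, but the shape of the result holds whenever the population is large and actual threats are rare:

```python
# Illustrative base-rate sketch: every number here is an assumption,
# not a figure from any real system.

population = 10_000_000      # people screened
true_threats = 100           # actual positives in that population
false_positive_rate = 0.001  # 0.1% of innocent people wrongly flagged
true_positive_rate = 0.95    # 95% of real threats correctly flagged

false_positives = (population - true_threats) * false_positive_rate
true_positives = true_threats * true_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"False alarms: {false_positives:,.0f}")
print(f"Real hits:    {true_positives:,.0f}")
print(f"Share of flags that are correct: {precision:.1%}")
# ~10,000 false alarms vs ~95 real hits: under 1% of flags are correct,
# even though the per-record error rate looks tiny.
```

With those assumed rates, roughly 99 out of every 100 alerts point at an innocent person, which is exactly the pattern the terrorism databases ran into.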