I wonder how much more likely it is to get a false positive from a black student.

The question is whether that Doritos-carrying kid is only still alive because he is white. He had armed cops pointing guns at him over a false positive about a gun, and the cops must have figured it was likely a false positive, since the tip came from AI surveillance. These are the same cops who typically do nothing when an actual shooter is roaming a school on a killing spree: in the Uvalde school shooting, hundreds of officers in full body armor milled around the school, refused to engage the shooter inside, and even prevented parents from going in to rescue their kids.

Before clicking on the article, I kinda assumed the student was black. I wouldn't be surprised if the AI model they're using has race-related biases. If anything, I'd be surprised if it didn't.