>we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on
That "we could" is doing some very heavy lifting. But even if we pretend it's remotely feasible, do we want to take an institution that already trains submission to authority and use it to normalize ubiquitous "for your own good" surveillance?
Not to mention that the human in question faces a choice: accept personal responsibility for letting a possible weapon into a school, or "pass that liability on to the police". Which do you think they'll pick, every single time?