The article doesn't outright confirm there was a human in the loop, but it suggests police had a chance to manually verify the photo before going out to harass this poor kid.

I suspect, though, that the AI flagging that image heavily influenced the cop doing the manual review. Without the AI, I'd expect that a cop manually watching a surveillance feed would have found nothing out of the ordinary, and this wouldn't have happened.

So I agree that it's weird to pin the blame solely on the human in the loop. The human certainly shares some of it, but the AI model that flagged this in the first place (and did an objectively terrible job of it) deserves most of the blame.