This may be mean, but we should really be careful about just handing AI over to technically illiterate people. They're far more likely to blindly trust the LLM/AI output than someone more experienced who takes a beat. AI in an agentic-state society (which is what we have in America, at least) is an absolute ticking time bomb. Honestly, this is what AI safety teams should be concentrating on: making sure people who think the computer is infallible understand that, no, it isn't, and that you shouldn't just assume whatever it tells you is correct.

We already handed the Internet over to technically illiterate people a long time ago.

It's basically a failure to set up a proper response playbook.

Instead of:

1. AI detects a gun on surveillance footage

2. Dispatch armed police to location

It should be (see the sketch after the list):

1. AI detects a gun on surveillance footage

2. Human reviews the pictures and verifies the threat

3. Dispatch armed police to location
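Something like this minimal sketch in Python. To be clear, every name here (`DetectionAlert`, `human_review`, `dispatch_police`, and so on) is invented for illustration, not any real vendor's API; the only point is that the dispatch step is unreachable without a human verdict in between:

```python
# Hypothetical sketch only -- all names are invented, not a real system's API.
from dataclasses import dataclass

@dataclass
class DetectionAlert:
    camera_id: str
    label: str         # what the model thinks it saw, e.g. "gun"
    confidence: float  # model score, not ground truth
    frame_path: str    # snapshot for the reviewer to inspect

def human_review(alert: DetectionAlert) -> bool:
    """Step 2: a trained operator looks at the actual frame before anything
    happens. Stubbed with a console prompt; in practice this would be a
    review queue/UI with a response-time SLA."""
    print(f"[REVIEW] camera={alert.camera_id} label={alert.label} "
          f"confidence={alert.confidence:.0%} frame={alert.frame_path}")
    return input("Confirm threat? [y/N] ").strip().lower() == "y"

def handle_alert(alert: DetectionAlert) -> None:
    # Step 1 happened upstream: the model emitted `alert`.
    if human_review(alert):
        dispatch_police(alert)     # step 3 only runs after verification
    else:
        log_false_positive(alert)  # e.g. a bag of Doritos

def dispatch_police(alert: DetectionAlert) -> None:
    print(f"[DISPATCH] armed response to camera {alert.camera_id}")

def log_false_positive(alert: DetectionAlert) -> None:
    print(f"[LOG] false positive from camera {alert.camera_id}")

if __name__ == "__main__":
    handle_alert(DetectionAlert("lobby-03", "gun", 0.87, "frame_1042.jpg"))
```

The structure is the whole argument: in the bad playbook, the model output feeds straight into dispatch; in this one, the false-positive path exists and gets logged, which also gives you data on how often the model is wrong.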

I think the latter version is likely what already took place in this incident, and it was actually a human who also mistook a bag of Doritos for a gun. But that version of the story isn't as interesting, I guess.