https://archive.is/DYPBL

> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.

It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.

Exactly. In a saner world, we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on.

But alas, we don't live in that world. We live in a world where there will be firings, civil liability, and even criminal liability for those who make wrong judgment calls. If the AI says "possible gun", the human running things who alerts a SWAT team faces all upside and no downside.

Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.

I can't say that I think it would be a saner world to have the equivalent of a teacher or hall monitor sitting in on every conversation, even if that computer chaperone isn't going to automatically involve the cops. I don't think you can build a better society where everyone is expected to speak and behave defensively in every circumstance as if their words could be taken out of context by a snitch - computer or otherwise.

Absolutely agree, constant surveillance is something we have too much of already.

My thought when posting was: if schools already have surveillance cameras that human security guards are watching, adding an AI that alerts them to items of interest isn't so bad on its own. But maybe you've changed my mind. The AI pays invasive attention to every stream, whereas a guard may be watching 16 feeds at once and barely paying attention, and no one may ever even view a feed unless a crime occurs and they go looking for evidence.

Regardless, this setup was way worse! The article said the AI:

> ... scans existing surveillance footage and alerts police in real time when it detects what it believes to be a weapon.

Wow, the system was designed with no human in the loop - it automatically summons armed police!

There is still liability there, and it should be even higher when the decision was to implement such a callously bad process. Doubly so now that this has demonstrably happened once.

>we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on

That "we could" is doing some very heavy lifting. But even if we pretend it's remotely feasible, do we want to take an institution that already trains submission to authority and use it to normalize ubiquitous "for your own good" surveillance?

Not to mention that the human in question can either accept responsibility for letting a weapon into a school, or "pass that liability on to the police". What do you think they'll do?

At least in the current moment, the increasing turn toward using autonomous weaponry against one's own citizens says less about humanity than about the US. I think US foreign policy is a disaster, but turning the AI-powered military against the citizenry does look like it's going to be quite successful, presumably because the US leadership is fighting an enemy incapable of defending itself. I think it's economically unsustainable, though. AI won't actually create value once it's a commodity itself (since a true commodity has its value baked into its price). Rates of profit will continue to fall. The ruling class will become increasingly desperate in its search for growth. Eventually an economy that resorts to techno-fascism implodes. (Not before things turn quite ugly, of course.)

Actually, China is far further along in "turning autonomous weaponry against one's citizens" than the US is. Ubiquitous surveillance and the "social credit score" system have been expanding in China since the early 2000s.

In fact, one might say that what the communist parties did starting in the 1910s was pretty much that. Ubiquitous surveillance is the problem here, not AI. Communist states used tens of thousands of agents who would just walk around, listen in on random conversations, and arrest (and later torture and deport) people. Of course, communist states that still exist, like China, have started using AI to do this, but it is nothing new for China and its people.

And, of course, what these communist states are doing is protecting the rich and powerful in society, and enforcing their "vision", using far more oppressive means than even the GOP dares to dream about. Including against "socialist causes" like LGBTQ rights - for starters, using state violence against people merely for talking about problems.

But that's a false dichotomy, isn't it? Authoritarian communism vs. techno-fascism?

> far more oppressive means than even the GOP dares to dream about

That seems to be exactly what they are dreaming about. Something like China's authoritarianism minus the wise stewardship of the economy, plus killer drones.

"It wasn't used as directed", says man selling Big Boom Fireworks to children.

I do not, in any way, disagree with holding Gaggle accountable for this.

But can we at least talk about also holding the school accountable for the absolutely insane response?

You talk about not selling to schools that have "zero tolerance" policies as if those are an immutable fact of nature that can never be changed, but they are a human thing that has very obvious negative effects. There is no reason we actually have to have "zero tolerance" policies that traumatize children who genuinely did nothing wrong.

"Zero tolerance" for bringing deadly weapons to school, I can understand. So long as what's being checked for is actual deadly weapons, and not just "anything vaguely gun-shaped", or "anything that one could in theory use as a deadly weapon" (I mean, that would include things like "pens" and "textbooks", so...).

"Zero tolerance" for particular kinds of language is much less acceptable. And I say this as someone who is fully in favor of eliminating things like hate speech or threats of violence—you don't do it by coming down like the wrath of God on children for a single instance of such speech, whether it was actually hate speech or not. They are in school; that's the perfect place to be teaching them a) why such speech is not OK, b) who it hurts, and c) how to express themselves without it, rather than just treating them like terrorists.