> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.

And some teen may be traumatized. Again, unsafe.

Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.

https://www2.ljworld.com/news/schools/2025/aug/07/lawrence-s...

Another false positive by one of these leading content filters schools use - the kid said something stupid in a group chat and an AI reported it to the school, and the school contacted the police. The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.

These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where humans review all the alerts before they're forwarded to the school or authorities. This is a paid addon, though.

https://archive.is/DYPBL

> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.

It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.

Exactly. In a saner world, we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on.

But alas, we don't live in that world. We live in a world where there will be firings, and civil or even criminal liability, for those who make wrong judgment calls. If the AI says "possible gun", the human running things who alerts a SWAT team faces all upside and no downside.

Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.

I can't say that I think it would be a saner world to have the equivalent of a teacher or hall monitor sitting in on every conversation, even if that computer chaperone isn't going to automatically involve the cops. I don't think you can build a better society where everyone is expected to speak and behave defensively in every circumstance as if their words could be taken out of context by a snitch - computer or otherwise.

Absolutely agree, constant surveillance is something we have too much of already.

My thought when posting was: if the schools already have surveillance cameras that human security guards are watching, adding an AI to alert them to items of interest didn't seem so bad on its own. But maybe you've changed my mind. The AI pays invasive attention to every stream, whereas a guard may be watching 16 feeds at once and barely paying attention, and no one may ever even view a feed unless a crime occurs and they go looking for evidence.

Regardless, this setup was way worse! The article said the AI:

> ... scans existing surveillance footage and alerts police in real time when it detects what it believes to be a weapon.

Wow, the system was designed with no human in the loop - it automatically summons armed police!
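
To make the contrast concrete, here's a minimal sketch of the difference (all names, the threshold, and the review function are hypothetical illustration, not Omnilert's actual pipeline):

    from dataclasses import dataclass

    @dataclass
    class Detection:
        camera_id: str
        confidence: float  # model's score that a weapon is present
        frame_url: str     # snapshot a reviewer could inspect

    def notify_police(det: Detection) -> None:
        print(f"DISPATCH: armed response to camera {det.camera_id}")

    def human_review(frame_url: str) -> str:
        # Stand-in for a trained reviewer looking at the actual frame;
        # returns "weapon", "unsure", or "benign".
        return "benign"  # e.g., it's a bag of chips

    def auto_dispatch(det: Detection) -> None:
        # What the article describes: any detection over threshold goes
        # straight to armed responders. No human looks at the frame.
        if det.confidence > 0.5:
            notify_police(det)

    def human_gated_dispatch(det: Detection) -> None:
        # The alternative: a human verifies before anyone with a gun moves.
        if det.confidence > 0.5:
            verdict = human_review(det.frame_url)
            if verdict in ("weapon", "unsure"):
                notify_police(det)
            # "benign": log it and move on; nobody gets swatted over snacks

    det = Detection("cafeteria-3", 0.72, "https://example.invalid/frame.jpg")
    auto_dispatch(det)         # summons armed police on a raw model score
    human_gated_dispatch(det)  # sends no one once a human sees the chips

The point isn't the code; it's that the second version costs one salaried reviewer, and the first costs a teenager held at gunpoint.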

There is still liability there, and it should be even higher given the decision to implement such a callously bad process. Doubly so since this has now demonstrably happened once.

>we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on

That "we could" is doing some very heavy lifting. But even if we pretend it's remotely feasible, do we want to take an institution that already trains submission to authority and use it to normalize ubiquitous "for your own good" surveillance?

Not to mention that the human in question can either accept responsibility for letting a weapon into a school, or "pass that liability on to the police". What do you think they'll do?

At least in the current moment, the increasing turn to using autonomous weaponry against one's own citizens - I don't think it says so much about humanity as about the US. I think US foreign policy is a disaster, but turning the AI-powered military against the citizenry does look like it's going to be quite successful, presumably because the US leadership is fighting an enemy incapable of defending itself. I think it's economically unsustainable, though. AI won't actually create value once it's a commodity itself (since a true commodity has its value baked into its price). Rates of profit will continue to fall. The ruling class will become increasingly desperate in its search for growth. Eventually an economy that resorts to techno-fascism implodes. (Not before things turn quite ugly, of course.)

Actually China is far further along in "turning autonomous weaponry against one's citizens" than the US is. Ubiquitous surveillance and "social credit score" have been expanding in China since the early 2000s.

In fact, one might say that what the communist parties did starting in the 1910s was pretty much that. Ubiquitous surveillance is the problem here, not AI. Communist states used tens of thousands of "agents" who would just walk around, listen in on random conversations, and arrest (and later torture and deport) people. Of course, communist states that still exist, like China, have started using AI to do this, but it is nothing new for China and its people.

And, of course, what these communist states are doing is protecting the rich and powerful in society, and enforcing their "vision", using far more oppressive means than even the GOP dares to dream about. Including against "socialist causes", like LGBTQ people. For starters, using state violence against people for merely talking about problems.

But that's a false dichotomy, isn't it? Authoritarian communism vs. techno-fascism?

> far more oppressive means than even the GOP dares to dream about

That seems to be exactly what they are dreaming about. Something like China's authoritarianism, minus the wise stewardship of the economy, plus killer drones.

"It wasn't used as directed", says man selling Big Boom Fireworks to children.

I do not, in any way, disagree with holding Gaggle accountable for this.

But can we at least talk about also holding the school accountable for the absolutely insane response?

You talk about not selling to schools that have "zero tolerance" policies as if those are an immutable fact of nature that can never be changed, but they are a human thing that has very obvious negative effects. There is no reason we actually have to have "zero tolerance" policies that traumatize children who genuinely did nothing wrong.

"Zero tolerance" for bringing deadly weapons to school, I can understand. So long as what's being checked for is actual deadly weapons, and not just "anything vaguely gun-shaped", or "anything that one could in theory use as a deadly weapon" (I mean, that would include things like "pens" and "textbooks", so...).

"Zero tolerance" for particular kinds of language is much less acceptable. And I say this as someone who is fully in favor of eliminating things like hate speech or threats of violence—you don't do it by coming down like the wrath of God on children for a single instance of such speech, whether it was actually hate speech or not. They are in school; that's the perfect place to be teaching them a) why such speech is not OK, b) who it hurts, and c) how to express themselves without it, rather than just treating them like terrorists.

> They are suing Gaggle, who claims they never intended their system to be used that way.

Yeah, there's a shop near me that sells bongs "intended" for use with tobacco only.

> They are suing Gaggle, who claims they never intended their system to be used that way.

Is there some legal way to sue a pair of actors (Gaggle and school) then let them sue each other over who has to pay what percentage?

You separately sue everyone that might be liable. Some of the parties you sue might end up suing each other.

> The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time.

All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.

> This is a paid addon, though

Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.

>...its purpose is to “prioritize safety and awareness through rapid human verification.”

Oh look, a corporation refusing to take responsibility for literally anything. How passé.

The corporation was invented virtually to eliminate responsibility/culpability for any individual.

Human car crash? Human punishment. Corporate-owned car crash? A fine which reduces salaries some negligible percent.

Yes, corporations have all of the rights of a person, abilities beyond a person, yet few of the responsibilities of a person.

Our failure at "corporate alignment" makes it pretty clear that we're also going to fail at any version of "AI alignment"...

The two will likely facilitate each other :(

I remember reading a quote from someone to the effect of "I'll support corporate personhood after I see Texas execute a corporation". I'm definitely misremembering but you get the sentiment, haha

They actually don't have all the rights of a person and they do have those same responsibilities.

If this company was a sole proprietorship, the only recourse this kid would have is to sue the owner, up to bankruptcy.

Since it's a corporation, his recourse is to sue the company, up to bankruptcy.

As for corporations having rights, I can explain it further if necessary but the key understanding is that the singular of "corporations are people" is "a corporation is people" not "a corporation is a person".

You can't put a corporation in prison. But a person you can. This is one of the big problems. The people making the decisions at corporations are shielded from personal consequences by the corporation. A corporation can be shut down but it rarely happens.

Even when Boeing knowingly caused the deaths of hundreds (especially since the second crash was entirely preventable if they had been honest after the first one), all they got were some fines. Those just end up being charged back to their customers, a big one being the government that fined them in the first place.

> You can't put a corporation in prison. But a person you can. This is one of the big problems.

It really isn't -- we're talking about a category of activities that involves only financial liability or civil torts in the first place, regardless of whether the parties involved are organizations or individuals. You can't put people in prison for civil torts.

Prison is irrelevant to 98% of the discussion here. And the small fraction of cases in the status quo that do involve criminal liability -- even within organizations -- absolutely do assign that liability to specific individuals, and absolutely can involve criminal penalties including jail time. Actual criminal conduct is precisely where the courts "pierce the veil" and hold individuals accountable.

> Even when Boeing knowingly caused the deaths of hundreds (especially since the second crash was entirely preventable if they had been honest after the first one), all they got were some fines.

All anyone would ever get in a lawsuit is some fines. The matter is inherently a civil one. And if there were any indications of criminal conduct, criminal liability can be applied -- as it often is -- to the individuals who engaged in it regardless of whether they are operating within an organization or on their own initiative.

The only real difference is that when you sue a large corporation, you're much more able to actually collect the damages you win than you would be if you were just suing one guy operating by himself. If the aim of justice is remunerative, not just punitive, then this is a much superior situation.

> Those just end up being charged back to their customers, a big one being the government who fined them in the first place.

Who would be paying to settle the matter in your preferred situation? It sounds like the most likely outcome is that the victims would just eat the costs they've already incurred, since there'd be little chance of collecting damages, and taxpayers would bear the burden of paying for the punishment of whomever ends up holding the hot potato after all the scapegoating and blame deflection plays out.

> Even when Boeing knowingly caused the deaths

Since corporations aren't people, Boeing didn't know anything.

Did someone at Boeing have all of that knowledge?

I'm sure the top leadership was well aware of what happened after the first crash, yes. They should have immediately gone public, which would have prevented the second crash.

Don't forget that hiding MCAS from pilots and the FAA was a conscious decision. It wasn't something that 'just happened'. So was the decision not to make it depend on redundant AoA sensors by default.

My point is, I can imagine that MCAS's suicidal side effect was something unexpected (it was a technical failure edge case in a specific and rare scenario), and I get that not anticipating it could have been a mistake, not a conscious decision. But after the first crash they should have owned up to it and not waited for a second crash.

And who even cares if they knew?

Extenuating circumstances, at best.

> Since corporations aren't people, Boeing didn't know anything.

you have to recognize that a statement like this means that decision-makers at Boeing either knew or were negligent in their duties.

Which is a hell of a thing to say without evidence.

i can’t think of another option without giving them more credit than they deserve.

A drunk driver doesn't get to claim that they didn't know someone was in front of their car.

You need a judge and jury for prison sentences for criminal convictions.

If the government decides to prosecute the matter as a civil infraction, or doesn't even bother prosecuting but just has an executive agency hand out a fine, that's not a matter of the corporation shielding people, that's a matter of the government failing to prosecute or secure a conviction.

If the company is a sole proprietorship, you can sue the person who controls it up to bankruptcy, which will affect their personal life significantly. If the company is a corporation/LLC, you can sue the corporate entity up to the bankruptcy of the corporate entity, while the people controlling the company remain unaffected.

This gets even more perverse. If you're an individual you actually can't just set up an LLC to limit your own liability. There's no manner for an individual to say "I'm putting on a hat and acting solely as the LLC" - rather as the owner you need to find and employ enough judgement-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability. In other words, the very design of corporations/LLCs encourages avoiding responsibility.

You're correct with the nitpick about the Supreme Court's justification, but that justification is still poor reasoning. Corporations are government-created liability shields. How they can direct their employees should be limited, to avoid trampling on those individuals' own natural rights. A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / gen partnership.

> If the company is a sole proprietorship, you can sue the person who controls it up to bankruptcy, which will affect their personal life significantly.

I'm sure it will. But how do you collect $30M in damages from a single individual whose entire net worth is e.g. $1M? What if the sole proprietor actually owns no assets whatsoever, because he's set up a bunch of arrangements where he leases everything from third parties, and contracts out his business operations to a different set of third parties, etc.?

I don't get why so many people are so intent on trying to attribute the motivations to maximize one's own take, deflect blame for harm away from themselves, and cover up their questionable activities to some specific organizational model. All of those motivations come from the human beings involved -- they were always present and always will be -- and those same human beings will manipulate whatever rules or institutions are involved to the greatest extent that they can.

Blaming a particular organizational model for the malicious intentions of the people who are just using that model as a tool is a deep, deep error.

> If you're an individual you actually can't just set up an LLC to limit your own liability.

What are you talking about? Of course you can. People do it all the time.

> rather as the owner you need to find and employ enough judgement-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability.

You're conflating entirely unrelated concepts of liability here. Limited liability as it relates to LLCs and corporations is for financial liability. It means that the organization's debts are not the shareholders' debts. It has nothing to do with legal liability for one's own purposeful conduct, whether tortious or criminal.

The kind of liability protection that you think corporations enjoy but single-member LLCs don't -- protection from the liability for individual criminal behavior -- does not exist for anyone at all.

> A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / gen partnership.

The ownership structure of a business has nothing at all to do with how it hires employees and directs their activities. The same law of agency and doctrine of vicarious liability applies to all agent-principal relationships regardless of whether the principal is a corporation or a sole proprietorship.

> how do you collect $30M in damages from a single individual

It's not about being made whole from damages; it's about the incentives for the business owner. A sole proprietor has their own skin fully in the game, whereas an LLC owner does not (modulo things customarily shielded from bankruptcy, like retirement savings and a primary dwelling, and asset-protection strategies for the extremely rich, like charitable foundations).

> I don't get why so many people are so intent on trying to attribute the motivations to maximize one's own take, deflect blame for harm away from themselves, and cover up their questionable activities to some specific organizational model

Because this specific legal structure (not organizational model, that is orthogonal) is a powerful tool for deflecting blame.

> You're conflating entirely unrelated concepts of liability here... It has nothing to do with legal liability for one's own purposeful conduct, whether tortious or criminal

The point is that these concepts are quite intertwined for small businesses, and only become distinct when there are enough people involved to make a nobody's-fault "group project". Let's say I want to own a piece of rental property and think putting it in an LLC will protect my personal life from all the random things that might happen playing host to other people's lives. Managing one property doesn't take terribly much time, so I do it myself. Now it snows, the tenant does a crappy job of shoveling, and someone slips on the sidewalk out front, gets hurt, and sues. Since I'm personally involved in supervising the condition of the property, there is now a theory of personal liability against me: that I should have been aware of the poor condition of the sidewalk. (This same liability applies to the tenant, or anyone hired to shovel, but they're usually judgement proof, sympathetic, etc.)

Same thing with making repairs to the property, etc - any direct involvement (supplying anything but investment capital) opens up avenues for personal liability, negating the LLC protections.

> The same law of agency and doctrine of vicarious liability applies

The point is that LLC/corporate structures allow for much higher levels of scaling, allowing them to apply higher levels of coercion to their employees. Since these limited liability structures are purely creations of government (rather than something existing outside of government), it's straightforwardly justifiable to regulate what activities they may engage in to mitigate this coercion.

Unfortunately the company has a big war chest, and I have a small war chest, and I was priced out of court through legal shenanigans and delays the corporation's lawyers could afford.

Just bring back fucking pistol duels. I have a better chance of defending myself there.

Don't forget paying their way out of crimes and no applicability to three strikes laws.

> a corporation refusing to take responsibility for literally anything. How passé

Versus all the natural people at the highest echelons of our political economic system valiantly taking responsibility for fuckall?

> Versus all the natural people

We can at least hold them responsible.

> We can at least hold them responsible

We don’t. (We can also hold corporations responsible. We seldom do.)

The problem isn’t the particular form of legal entity that fraud and corruption wear.

Fair enough, but it is much harder to hold a corporation responsible.

Jail is a great deterrent for natural persons.

Jail is a great deterrent against criminal conduct. But natural persons are already risking jail when they engage in criminal conduct regardless of whether they're doing so within the scope of an organization or doing so on their own initiative.

Jail isn't on the table for financial liability or civil torts in the first place, and since pretty much all the forms of liability involving commercial conduct we're discussing here are financial liability or civil torts, it's not really relevant to the discussion.

> it is much harder to hold a corporation responsible

In some ways, yes. In most ways, no. In most cases, a massive fine aligns interests. Our problem is we've become weak-kneed at levying massive fines on corporations.

Unlike a person, you don't have to house a corporation to punish it. Your fine simply wipes out the owners. If the enterprise is a going concern, it's reborn under new ownership. If it's not, its assets are redistributed.

> Jail is a great deterrent for natural persons

Jail works for executives who defraud. We just, again, don't do it. This AI could have been sold by a billionaire sole proprietor, I doubt that would suddenly make the rules more enforceable.

It's probably just US culture of "if you aren't cheating you aren't trying to win hard enough".

You can try, but you might be unknowingly holding their carefully designated scapegoat responsible instead.

I certainly didn't imply that to be the case and I'm not sure how you could draw that conclusion from 2 whole sentences.

Engineer: hey, I made this cool thing that can help people in public safety roles process information and make decisions more efficiently! It gives false positives, but you save more time than it takes to weed through them.

Someone nearby: well what if they use it to replace human thinking instead of augment it?

Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.

Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.

Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…

::6 months later—some kid is being held at gunpoint over snacks.::

Nice fantasy, but the reality is that the "people in public safety roles" love using flimsy pretenses to harass and abuse vulnerable populations. I wish it were just overeager sales and marketing, but your view of humanity is way too naive, especially as masked thugs are disappearing people in the street as we type.

What? A) The naïveté of the engineer’s perspective was literally the whole point of the story. B) Saying I’m somehow absolving law enforcement by acknowledging other factors is absurd. My childhood best friend was shot and killed by police during a mental health crisis. C) If you think that police malevolence somehow absolves the tech world’s role in making tools for them, that’s as naive as it gets.

Refer to the post office scandal in Britain and the robodebt debacle in Australia.

The authorities are just itching to have their brains replaced by dumb computer logic, without regard for community safety and wellbeing.

Lack of Accountability as-a-Service! A very attractive proposition to negligent and self-serving organizations. The people in charge don't even have to pay for it themselves; they can just funnel the organization's money to the vendor. Encouraging widespread adoption helps normalize the practice. If anyone objects, shut them down as not thinking-of-the-children and something-must-be-done (and every other option is surely too complicated/expensive).

And the black box sentencing recommendation systems some US states bought into like a decade ago.

It’s actually “AI swarmed”, since no human reasoning, only execution, was involved - basically an AI directing resources.

Delegating the decision to AI, excluding the human from the "human in the loop", is kind of unexpected as a first step; in general, it was expected that exclusion would start from the other end. As an aside, I wonder how that is going to happen on the battlefield.

For this civilian use case, the next step is AR goggles worn by police, with that AI projecting onto the goggles where the teenager has his gun (kind of Black Mirror style), and the next step after that is obviously excluding the humans even from the execution step.

Reverse Centaur. MANNA.

3-in-1. Lack.

When attacked by bees, am I "hive swarmed"?

In any system, there are false positives and false negatives. In some situations (like high-recall disease detection), false negatives are much worse than false positives, because the cost of a false positive is just a more rigorous screening.

But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.

Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
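
For intuition on why the false-positive cost dominates in a case like this, here's a back-of-the-envelope Bayes calculation; all three input numbers are invented for illustration, not taken from any vendor's specs:

    # Even a fairly accurate detector produces mostly false alarms when
    # real weapons are vanishingly rare in the frames it scans.
    prevalence = 1e-6           # assumed fraction of frames with a real gun
    sensitivity = 0.95          # assumed P(alert | gun)
    false_positive_rate = 1e-4  # assumed P(alert | no gun)

    p_alert = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    precision = sensitivity * prevalence / p_alert  # P(gun | alert)

    print(f"P(gun | alert) = {precision:.4f}")  # ~0.0094
    # Under these assumptions, roughly 99% of alerts are false alarms,
    # which is why the cheap mitigation (a human glancing at the frame)
    # belongs in front of the expensive one (dispatching armed police).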

In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we refuse to address the problem of guns and choose instead to deal with the downstream effects.

> the primary cause of gun violence in the first place: the ubiquity of guns in our society

I would have gone with “a normalized sense of hopelessness and indignity which causes people to feel like violence is the only way they can have any agency in life” considering “gun” is the adjective and “violence” is the actual thing you're talking about.

Both are true. The underlying oppressive, lonely, pro-bullying culture creates the tension. The proliferation of high lethality weapons makes it more likely that tension will eventually release in the form of a mass tragedy.

Improvement in either area would be a net positive for society. Improvement in both areas is ideal but solving proliferation seems a lot more straightforward than fixing the generally miserable society problem.

I think there’s probably some correlation between ‘generally miserable society’ and ‘we think it’s OK to have children surveilled by AI’.

I tend to categorize these under a Dutch idiom which I can’t describe, but which is abundantly clear in pictorial form:

https://klimapedia.nl/wp-content/uploads/2020/01/Dweilen_met...

"Treating the symptoms not the cause" would be the english equivalent.

(for others: the Dutch expression is "Dweilen met de kraan open", "Mopping with the tap open")

To be clear, the false negative here would be a student who has brought a gun to a school and the computer ignores it. That is a situation where potentially multiple people can be killed in a short amount of time. It is not far, far worse to send cops.

Depends on the false positive rate, doesn't it? If police are being sent to storm a school every week due to a false positive, that is quite bad. And people will become conditioned to not care about reports of a gun at a school because of all the false positives.

For what I’m saying, no, it doesn’t, because I’m just comparing a single instance of a false positive to a single instance of a false negative. Neither is desirable.

> But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.

Given the probability of police officers in the USA taking any action as hostile and then ending up shooting him, a false positive here is the same as swatting someone.

The system here sent the police off to kill someone.

Yep. Think of it as the new, exciting version of swatting. Naturally, one will still need to figure out common ways to force a specific misattribution, but, sadly, I think there will be people working on it (if there aren't already).


Sure. But school shootings are also common in the US. A student who has brought a gun to a school is very likely not harmless. So false negatives aren’t free either.

What's the proportion of gun-carrying to shooting in schools?

Well guns aren’t allowed in schools at all. It’s a felony. So if your point is that the ratio is low, that’s only because the denominator is way too big.

No point, a question.

I'd suspect kids take guns to 'be cool', show friends, or make threats without intending to actually use them. Also: intention to harm that wasn't followed through; intention to defend themselves if threatened; other reasons?

Probably no sound stats, but I'm curious about it, so asked.

Considering the slices of the socioeconomic ladder mostly involved here, I'd bet that "it won't grow legs if it's on me" dwarfs all other motives for bringing guns to school.

I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.

We answered the screams at the door to guns pointed at our faces, and countless cops.

It was explained to us that this was the restrained version. We got a knock.

Unfortunately, I understand why these responses can't be neutered too much. You just never know.

In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half finished product in a way that prioritizes your profit over public safety.

s/COULD/SHOULD/g

Happened to a friend of mine, courtesy of an ex-GF who said he was on psych meds (true, though he is nonviolent with no history) and that he was threatening to kill his parents. NYPD SWAT no-knock kicked down the door to his apartment, which terrorized his elderly parents as they pointed guns at their son (in his words, "machine guns"). BUT because he has psych issues and is on meds, he was forced into a cop car in front of the whole neighborhood to get a psych evaluation. He only received an apology from the cops, who said they had no choice but to follow procedure.

edit: should add, sorry to hear that.


> who said he was on psych meds (true though he is nonviolent with no history)

I don't understand the connection here


Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?

I've had convos with cops about swatting. The good ones aren't happy to go kick down the door of someone who isn't about to harm anyone, but they feel they can't chance making a fatally wrong call in case it isn't swatting. They also have procedures to follow, and if they don't the outcome is on them personally and potentially legally.

As for bad cops they look for any reason to go act like aggro billy badasses.

> the good ones ...

uh-huh

> if they don't the outcome is on them personally and potentially legally.

Bullshit, they're rarely held accountable when they straight up murder people, and even then "accountable" is "have to go get a different job". https://en.wikipedia.org/wiki/Killing_of_John_T._Williams

ACAB

It seems entirely in line to not be held accountable for terrorizing/murdering people when you are held accountable for doing the opposite?

It just means the police force is an instrument of terror.

>It just means the police force is an instrument of terror.

always had been dot jpeg.



This is a really good question. Sadly, the answer is that they think it's how the system is meant to work. At least, that seems to be the answer I see coming from police spokespeople.

It's likely procedure that they have to follow (see my other post in this thread).

I hate to say this, but I get it. Imagine a scenario where they decide "sounds phony, stand down", only for it to be real, and people are hurt or killed because the "cops ignored our pleas for help and did nothing". That would be a horrible mistake they could be liable for, never mind the media circus and PR damage. So they treat all scenarios as real and figure it out after they knock/kick in the door.

To that end, we should all have a cop assigned to us. One cop per citizen, with a gun pointed at our head at all times. Imagine a scenario happens where someone does something and that cop wasn't there? Better to be safe.

Why stop at one? Imagine how much safer we’d be with TWO cops per citizen! And all those extra jobs that would be created!

And then cops for the cops!

I don't think you know how policing works in America. To cops, there are sheep, sheepdogs, and wolves; they are sheepdogs protecting us sheep from the criminals. Nobody needs to watch the sheepdogs!

But let's think about their analogy a little more: sheepdogs and wolves are both canines. Hmm.

Also "funny" how quickly they can reclassify any person as a "wolf", like this student. Hmm.

> Nobody needs to watch the sheepdogs!

A sheepdog that bites a sheep for any reason is killed.

Maybe we should move beyond binary thinking here. Yeah, it's worth sending someone to investigate, but also worth making some effort to verify who the call is coming from - to get their identity, and to ask them something simple, like describing the house (in this example), so the arriving cops will know they've got the right address. Now, of course, you can get a description of the house with Google Street View, but 911 dispatchers can solicit information like what color car is currently parked outside or suchlike. They could also look up who occupies the house and make a phone call while cops are on the way.

Everyone knows swatting is a real thing that happens and that it's problematic, so why don't police departments have procedures in place which include that possibility? Who benefits from hyped-up police responses to false claims of criminal activity?

Yes, there's a middle ground here.

My daughter was swatted, but at the time she lived in a town where the cops weren't militarized goon squads. What happened was two uniformed cops politely knocked on her door, had a chat with her, and asked if they could come in and look around. She allowed them, they thanked her and the issue was resolved.

This is the way. Investigate, even a little, before deploying great force.

Cops don't have a duty to protect people, so "cops ignored our pleas for help and did nothing" is a-ok, no liability (thank you, qualified immunity). They very much do not treat all scenarios as real; they go gung-ho when they want to and hang back for a few hours "assessing the situation" when they don't.

> they go gung-ho when they want to and hang back for a few hours "assessing the situation" when they don't.

Yeah. They were happy to take their sweet time assessing everything safely outside the buildings at Uvalde.

I'm a paramedic who has personally attended a swatting call where every single detail was egregiously wrong, but police still went in, no-knock, causing thousands of dollars in damage that, to be clear, they have absolutely zero liability for. Thankfully there were no injuries.

"I can see them in the upstairs window" - of a single story home.

"The house is red brick" - it was dark grey wood.

"No cars in the driveway" - there was two.

Cops still said "hmm, still could be legit" and battered down the front door, deployed flashbangs.

There are more options here than "do nothing" and "go in guns blazing".

Establishing the probable trustworthiness of the report isn't black magic. Ask the caller for details, question the neighbours, look in through the windows, or just send two plainclothes officers pretending to be salesmen to knock on the door first. Continuously adjust the approach as new information comes in. This isn't rocket science, ffs.

See my other comment in this thread. I've personally witnessed dispatchers trying to ask the caller for verifying details because they were suspicious.

Even with multiple major discrepancies, police still decided they should go in, no-knock.

It doesn't make sense. If you were holding people hostage, you'd have demands for their release. Windows could be peeked into. If you dragged a dead body into a house, there'd be evidence of that.


False positives can effectively lead to false negatives too. If too many alarms end in teens getting swatted (or worse) for eating chips, people might ignore the alarm if an actual school shooter triggers it. Might assume the AI is just screaming about a bag of chips again.

I think a “true positive” is an issue as well if the protocol to manage it isn’t appropriate. If the kid was armed with something other than nacho cheese, the provocative reaction could have easily set off a tragic chain of events.

Reality is there are guns in schools every day. “Solutions” like this aren’t making anyone safer. School shooters don’t fit this profile - they are planners, not impulsive people hanging out at a social event.

More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.

>And some teen may be traumatized.

Um. That's not really the danger here.

The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.

This tech is not supposed to be used in this fashion. It's not ready.

Did you want to emphasize or clarify the first danger I mentioned?

My read of the "Um" and the quoting, was that you thought I missed that first danger, and so were disagreeing in a dismissive way.

When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.

I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.

I’d argue the second danger is worse, because shooting might be incidental (and up to human judgement) but being traumatized is guaranteed and likely to be much more frequent.

I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.

If the US wasn't psychotic, not all police would have to be armed, and not every police response would be an armed response.

Even if not all police were armed, the response to "AI said someone has a gun" would always be the armed police.

Why would it not be "human reviews the image that the AI said was a gun"?

The entire selling point of AI is to not have humans in the loop.

Even despite the massive protests in the past few years, we're moving further in that direction.


Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.

Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.

[0] Even though no other free society has to pay that price, but whatever.

Far more deaths by automobile than homicides by guns.

In the US, guns and automobiles kill roughly the same number of people each year.

Guns are actually easier to control, and controlling them significantly reduces the ability to target multiple people at once. There are a lot of countries successfully controlling guns.

To the argument that then only criminals would have guns: in India at least, criminals have very limited access to guns. They have to resort to unreliable handmade guns, which are difficult to procure. Usually criminals use knives and swords because of that.

> Guns are actually easier to control

This would not be the case in the US.


> The danger is that it's as clear as day that in the future someone is gonna be killed.

This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.

So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)

> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

huh, i can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.

Is HN really this ready to dive into obvious logical fallacies?

My original comment, currently sitting at -4, has so far attracted one guilt-by-association plus implied-threat combo, and no other replies. To remind readers: My horrifying proposal was to measure both the risks and the benefits of things.

If anyone genuinely thinks measuring the risks and benefits of things is a bad idea, or that it is in general a good idea but not in this specific case, please come forward.

> Is HN really this ready to dive into obvious logical fallacies?

No, which is why your comment was downvoted - the following is a fallacy:

> This argument can be made about almost every technology,

That's the continuum fallacy.

No, it isn't the continuum fallacy.

I'm not claiming that a continuous range exists, and that one end cannot be distinguished from the other because the slope between those points is gradual. I'm claiming that there is a category, called technology, and everything in that category is subject to that argument.

If you want to dispute that, it's incumbent on you to provide evidence for why some technology subcategories should not be subject to that argument.

Specifically: You need to present a case for why AI devices like the one discussed in TFA should not be evaluated in terms of their risks and benefits to society.

Good luck with that argument.

sorry for being glib; it was low hanging fruit. my actual point should have been more clearly stated: measuring risk/benefit is really complicated because there's almost never a direct comparison to be made when balancing profit, operational excellence and safety.