AI is a liability issue waiting to happen. And this is just another example.

It's the opposite: it's absolution from liability. "The AI did it" is the ultimate excuse to avoid accepting responsibility and consequences.

Courts are already refusing to accept this excuse.

https://pub.towardsai.net/the-air-gapped-chronicles-the-cour...

Good to know it's not a fait accompli yet, but I won't be surprised to see corporations pushing hard for this.

... which is why the institutions that assign responsibility and consequences need to make it really clear that excuse won't fly. With illustrative examples.

It's a tool. Used incorrectly, it will lead to errors, just like a hammer used incorrectly can hit the user's finger.

There is enormous variability in how hard a tool is to use correctly, how likely it is to go wrong, and how severe the consequences are. AI spans a wide range on all three variables because its use cases vary far more widely than a hammer's.

The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.

When somebody uses a tool to hurt somebody, they need to be held accountable. If I smack you with a hammer, that needs to be prosecuted. Using AI is no different.

The problem here is incidental to the tool; it was done by the cops and therefore nobody will be held accountable.

Systems are also a tool. Whoever institutes and helps build the system that systematically results in harm is also responsible.

That would be the vendors, the system planners, and the institutions that greenlit this. It would also include the larger financial tech circle that is trying to drive large-scale AI adoption. Like Peter Thiel, who sees technology as an "alternative to politics", i.e., a way to circumvent democracy. [1]

[1] https://stavroulapabst.substack.com/p/techxgeopolitics-18-te...

Nonsense. The manufacturer, distributor, and vendor of a hammer are not liable for its misuse. We already litigated and then legislated this regarding guns in the US.

As much as I detest Clearview and Thiel, the fault for this incident falls squarely on the justice system.

This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.

The tool is unethical, not broken. And unfortunately it remains legal for the time being. As such, it's a social or political problem that can be fixed.

The tool, like Google search, is likely biased towards returning results regardless of confidence.
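
To make that concrete, here's a minimal sketch of the difference (the names, scores, and threshold are invented for illustration; this is not any vendor's actual API):

```python
# Hypothetical face-match scores between a probe photo and a gallery.
# All names and numbers here are made up for illustration.
matches = [
    ("person_a", 0.41),
    ("person_b", 0.38),
    ("person_c", 0.22),
]

# A results-biased system ranks candidates and always surfaces the top hit,
# however weak it is:
best = max(matches, key=lambda m: m[1])
print("lead:", best)  # ('person_a', 0.41) -- returned despite low confidence

# A confidence-gated system treats "no reliable match" as a valid answer:
THRESHOLD = 0.80  # hypothetical cutoff
confident = [m for m in matches if m[1] >= THRESHOLD]
print("leads:", confident)  # [] -- nothing clears the bar, so no lead
```

The first behavior hands investigators a "lead" on every query; the second forces an explicit decision about whether there's any evidence at all.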

> Used incorrectly, it will lead to errors.

Only one small problem --- there is no way to tell if you are using it "correctly".

The only way to be sure is to not use it.

Using it basically boils down to, "Do you feel lucky?".

The Fargo police didn't get lucky in this case. And now the liability kicks in.

Some basic investigatory police work (the kind they did before AI) would have revealed the mistake before an innocent woman’s life was destroyed.

Yes. But doing the investigation negates much of the incentive for using AI.

Look for something similar to play out elsewhere --- using unreliable tools for decision making is not a good, responsible business plan. And lawyers are just waiting to press the point.

In this case it sounds as though AI could have been used to generate preliminary leads. When someone calls a tip line with information, police don’t just take their word for it, they investigate it. They know that tips they receive may be incorrect. They should have done the exact same here, but they didn’t.

I’m very opposed to AI in general, but this one is clearly human failure.

The noteworthy AI angle is the undeserved credence police gave to AI information. But that is ultimately their failure; they should be investigating all information they receive.

> ...but this one is clearly human failure.

Absolutely.

The failure starts with tool vendors who market these statistical/probabilistic pattern searchers as "intelligent". The Fargo police failed to fully evaluate these marketing claims before applying them to their work.

So in the same way that the failure rolled down hill, liability needs to roll back up.

AI can provide leads. Someone still needs to verify them and decide.

Generating and verifying bad leads costs money. Not verifying bad leads can cost much more.

At some point, you have to decide if wasting good money on bad intel makes sense.
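
To make the tradeoff concrete, a back-of-envelope sketch with entirely invented numbers:

```python
# Every number below is hypothetical; the point is the shape of the math.
leads_per_year = 200
verify_cost = 500                # staff time per lead
false_lead_rate = 0.30           # fraction of AI leads that are wrong
lawsuit_cost = 1_000_000         # one wrongful-arrest settlement
suit_risk_per_false_lead = 0.02  # chance a false lead becomes an arrest + suit

verify_total = leads_per_year * verify_cost
skip_expected = (leads_per_year * false_lead_rate
                 * suit_risk_per_false_lead * lawsuit_cost)

print(f"verify everything: ${verify_total:,}/yr")               # $100,000/yr
print(f"skip verification: ${skip_expected:,.0f}/yr expected")  # $1,200,000/yr
```

Under these made-up assumptions, verifying every lead is an order of magnitude cheaper than the expected cost of skipping verification.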

The article says that the Fargo police claimed to have done "additional investigative steps independent of AI". (Perhaps they're lying, or did a poor job because they thought the extra steps were a formality.)

Now the "qualified" immunity kicks in.

We will find out. But relying on AI is likely to cost the city of Fargo in one way or another. They say they have already stopped using AI and returned to good old fashioned human investigation.

https://www.lawlegalhub.com/how-much-is-a-wrongful-arrest-la...

Look, I'm generally considered AI's most vociferous detractor.

But...

> there is no way to tell if you are using it "correctly".

This simply isn't true, at least in cases like this.

I know common sense isn't really all that common, but why would you give more credence to an untested tool than an untested crack-addled human informant?

The entire point of the informant, or the AI in this instance, is to generate leads, which subsequently need to be checked.

There is no "correct" way to use AI in order to avoid bad results. The only prudent approach is to assume all results are bad until proven otherwise.

But this approach negates much of the incentive to pay for questionable results.

> The only prudent approach is to assume all results are bad until proven otherwise.

As is true with results from people.

> But this approach negates much of the incentive to pay for questionable results.

I'm not sure that follows. Even the crack-addled human informant has always been paid for questionable results.

What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stakes campaigns.

Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.

Otherwise, I'd say it's an extremely lazy argument.

Unlike hammers, people preface things with "Claude says", etc. I never see that kind of distancing with tools that aren't AI.

Dynamite is a tool. But we don't hand it out to anyone who wants to play with it.

We used to until quite recently. Anybody could buy dynamite at the hardware store. We had to end that because criminals were using it to hurt people.

I admit I was surprised to see you could buy dynamite in a hardware store until 1970.

Look for AI to follow a similar trajectory over time.

Impossible at this point. You cannot download dynamite.

Yes, regulation is inevitable.

Regulation is impossible. The AI barons literally control the federal government, so not even state regulations get tried.

Except this time the criminals are police.

They are far more often than anyone wants to admit. That's how we got 25% of the world's prison population.

AFAIK the actual cause of our high incarceration rate is longer sentences. Our conviction rate, for example, is similar to the UK's.

AI feels closer to a firearm than a hammer when assessing law enforcement's ability to quickly do massive, unrecoverable harm.