Without even looking at the AI part, I have a single question: Did anybody investigate? That's it.

Whether it's AI that flagged her, a witness who saw her, or her IP address appearing in the logs: did anybody bother to ask her "where were you on the morning of July 10th between 3 and 4pm?" But that's not what happened; they saw the data and said "we got her".

But this is the worst part of the story:

> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”

That's the lesson? Never go back to North Dakota. No, challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then a man dragged while his family was crying [1]. Unless we fight back, we are all guilty until cleared.

[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/

[1]: https://news.ycombinator.com/item?id=23628394

The thing about the legal system is there's no incentive to investigate to find the truth.

The incentive is to prosecute and prove the charges.

Speaking from the experience of being falsely accused after calling 911 to stop a drunk woman from driving.

The narrative they "investigated" was so obviously false that bodycam evidence directly contradicted multiple key facts. Officials were interested only in proving the case. Thankfully the jury came to the right verdict.

There needs to be consequences for shitty, procedure-ignoring police work. Period.

Minimum 1 year of jail time for grossly wrongful arrests that could have been avoided by applying standard procedure or basic investigative work.

> The thing about the legal system is there's no incentive to investigate to find the truth.

The truth is much more complicated and involves politics. For example Seattle (and possibly other cities?) enacted a law that involves paying damages for being wrong in the event of bringing certain types of charges. But that has resulted in some widely publicized examples where the prosecutor erred by being overly cautious.

I would absolutely never call the police on a woman. Simply walk far away and let her be someone else's problem.

Yes, of course someone should have investigated, but the larger point here is that people don’t because they are being sold a false narrative that AI is infallible and can do anything.

We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.

I don't think people on HN think "AI is infallible"; I think people on HN believe AI is sufficient for "most tasks". In the context of HN, "most tasks" refers to programming tasks, not arresting-and-jailing-people tasks.

You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool which could be used to destroy someone's life.

Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.

To the extent people trust AI to be infallible, it's just laziness and rapport (AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking as many humans would, it's the quintessential enabler[1]) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.

The models all have disclaimers that state the inverse. People just gradually lose sight of that.

[1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.

> To the extent people trust AI to be infallible, it's just laziness and rapport (…) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.

Why it happens is secondary to the fact that it does.

> The models all have disclaimers that state the inverse. People just gradually lose sight of that.

Those disclaimers are barely effective (if at all), and everyone knows that. Including the ones putting them there.

https://www.youtube.com/watch?v=Xj4aRhHJOWU

> Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.

I see all kinds of people being told that AI-based AI detection software used for detecting AI in writing is infallible!

You want to make sure people aren't using fallible AI? Use our AI to detect AI! What could possibly go wrong?

Where did you see this claim about AI-based AI detection?

We can barely convince the powers that be that eye-witness testimony is unreliable, after all.

I think you missed many important points.

"The trauma, loss of liberty, and reputational damage cannot be easily fixed," Lipps' lawyers told CNN in an email.

That sounds a LOT like a statement you make before suing for damages; not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."

This lady probably just wants to get back to normal life and get some money for the hell they put her through. She had never been on an airplane before; I doubt she is going to take on the entire system like you suggest. "Challenge the entire system" is easier said than done. What does that even mean, exactly?

It was worse than that, per the reporting from an earlier story [0]:

  ...Unable to pay her bills from jail, she lost her home, her car and even her dog.

There is not a jury in the country that will side against this woman. I am not even sure who will make the best pop-culture mashup: John Wick or a country songwriter?

(Also, what happened to journalism - no Oxford comma?)

[0] https://news.ycombinator.com/item?id=47356968

As an aside, AP Style is to not use an Oxford comma, and that's been the rule for 50+ years: https://www.prnewsonline.com/explainer-how-to-use-oxford-com...

This is upsetting.

Yes, finding out how badly wrong you were is never fun. Of course the lack of ubiquitous Oxford comma use is itself, and separately, displeasing.

AP Style is simply wrong on this, then.

You have more faith in the country than I do.

Indeed: let out on Christmas Eve with no money, 1,000 miles from home.

Where your home was lost to foreclosure because one JUDGE did not look at the paperwork.

There should be a way to personally sue somebody when they don't do their job of protecting the innocent. The JUDGE failed badly here.

Flimsy evidence would mean no warrant. Do your basic investigation, please... A rubber-stamping JUDGE caused this.

Why are they not named? As if they were a spectator. In fact they are the cause.

TBF isn't it rather unreasonable that our system permits your home to be foreclosed while you're detained prior to a hearing?

Also rather unreasonable to arrest someone who is clearly neither violent nor a flight risk. You could literally hold the trial via video conference at that point and there would be no downside.

Anyone in the chain of responsibility should be punished so severely that they will still be crying about it in 2030.

The real problem here is she'll get money, who knows how much, but that ultimately does nothing to actually address the problems in the system.

Effectively it just raises taxes to cover the cost of these failed prosecutions.

Every time one of these cases happens, a cop and a prosecutor should be out of a job permanently, possibly even jailed. The false arrest should cost the cop their job and get them blacklisted; the prosecution should cost the prosecutor their right to practice law.

And if the police union doesn't like that and decides to strike, every one of those cops should simply be fired. Much like we did to the ATC. We'd be better off hiring untrained civilians as cops than to keep propping up this system of warrior cops abusing the citizens.

> Whether it's AI that flagged her

It absolutely was. There's no question of this. Now we need to ask: how was the system marketed, what did the police pay for it, and how were they trained to use it?

> anybody bothered to ask her "where were you the morning of july 10th between 3 and 4pm.

Legally that amounts to "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered first.

> we are all guilty until cleared.

This is not a phenomenon that started with AI. If you scratch the surface, even slightly, you'll find that this is a common strategy used against defendants who are perceived as not being financially or logistically capable of defending themselves.

We have a private prison industry. The line between these two outcomes is very short.

> Legally that amounts to "hearsay" and cannot have any value.

How is that hearsay if she's directly testifying to her own whereabouts?

Hearsay would be if someone else were testifying "she was in X location on July 10th between 3 and 4pm", without the accused being available for cross-examination.

No!

"I was at the library" is firsthand testimony.

"I saw her at the library" is firsthand testimony.

"I saw her library card in her pocket" is firsthand testimony.

"She was at the library - Bob told me so" is hearsay. Just look at the word - "hear say". Hearsay is testifying about events where your knowledge does not come from your own firsthand observations of the event itself.

IANAL but AFAIK custodial interrogation triggers Miranda, lawyers, and those awful awful civil liberties we’re trying to get rid of.

Better just to apply Musk or Altman software to the problem and avoid it entirely.