"Now, with LLM-generated content, it’s hard to even build mental models for what might go wrong, because there’s such a long tail of possible errors. An LLM-generated literature review might cite the right people but hallucinate the paper titles. Or the titles and venues might look right, but the authors are wrong."
This is insidious, and if humans did it they would be fired and/or cancelled on the spot. Yet we continue to rave about how amazing LLMs are!
It's actually a complete reversal of the situation with self-driving car AI. Humans crash cars and hurt people all the time. AI cars are already much safer drivers than humans. Yet we all go nuts when a Waymo runs over a cat, while ignoring the fact that human drivers do the same thing daily!
Something is really broken in our collective morals and reasoning.
> AI cars are already much safer drivers than humans.
I feel this statement should come with a hefty caveat.
"But look at this statistic" you might retort, but I feel the statistics people pose are weighted heavily in the autonomous service's favor.
The frontrunner in autonomous taxis only runs in very specific cities for very specific reasons.
I avoid using them in a feeble attempt to 'do my part', but I was recently talking to a friend and was surprised to learn they avoid these autonomous services for a different reason: the cars drive what would be, to a human driver, very strange routes.
I wondered whether these unconventional, often longer, routes were taken in order to stick to well-trodden and predictable paths.
"X deaths/injuries per mile" is a useless metric when the autonomous vehicles only drive in specific places and conditions.
To get the true statistic you'd have to filter the human-driver statistics to match the autonomous services' operating conditions: weather, cities, number and location of people in the vehicle, and even which streets.
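The matching described above amounts to a stratified comparison: compute the human crash rate only over the strata the autonomous service actually operates in. A minimal sketch, with entirely invented cities, strata, and counts (none of this is real crash data):

```python
# Hypothetical sketch: all city names, weather strata, and counts below
# are invented for illustration, not real crash data.
human_records = [
    # (city, weather, miles_driven, crashes)
    ("san_francisco", "clear", 4_000_000, 18),
    ("san_francisco", "rain",  1_000_000, 9),
    ("buffalo",       "snow",  2_000_000, 30),
]

# Strata the hypothetical autonomous service actually operates in.
av_conditions = {("san_francisco", "clear"), ("san_francisco", "rain")}

def crash_rate(records, conditions=None):
    """Crashes per million miles, optionally restricted to the
    (city, weather) strata the autonomous service also drives in."""
    rows = [r for r in records
            if conditions is None or (r[0], r[1]) in conditions]
    miles = sum(m for _, _, m, _ in rows)
    crashes = sum(k for _, _, _, k in rows)
    return crashes / (miles / 1_000_000)

overall = crash_rate(human_records)                 # all strata
matched = crash_rate(human_records, av_conditions)  # matched strata only
print(f"overall human rate: {overall:.2f} crashes per million miles")
print(f"matched human rate: {matched:.2f} crashes per million miles")
```

In this made-up example the matched human rate comes out lower than the overall rate, because the overall figure is dragged up by strata (snowy Buffalo) the autonomous service never drives in. Whether the real gap shrinks or grows is exactly the empirical question the providers could answer.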
These service providers could run this analysis; they have the data, compute, and engineering talent. But they are disincentivized from doing so as long as everyone keeps parroting their marketing speak for them.
I don't know why that matters. The city selection and routing are part of the overall autonomous system. People get where they need to be with fewer deaths and injuries, and that's what matters. I suppose you could normalize to "useful miles driven" to account for longer, safer routes, but even then the statistics are overwhelmingly clear that Waymo is at least an order of magnitude safer than human drivers, so a small tweak like that is barely going to matter.
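The "useful miles driven" normalization mentioned above can be sketched as follows. All trip numbers are hypothetical: each trip records the miles actually driven, the direct-route distance between the same endpoints, and any crashes.

```python
# Hypothetical sketch of a "useful miles driven" normalization. Trips
# are (actual_miles, direct_route_miles, crashes); numbers invented.
av_trips = [(6.0, 5.0, 0), (12.0, 9.0, 0), (8.0, 8.0, 1)]

def rate_per_mile(trips, useful_only=False):
    """Crash rate per mile; with useful_only=True, only the direct-route
    distance counts toward the denominator, so longer detour routes
    can't pad the safety statistic."""
    miles = sum(direct if useful_only else actual
                for actual, direct, _ in trips)
    crashes = sum(k for _, _, k in trips)
    return crashes / miles

raw_rate = rate_per_mile(av_trips)                       # 1 crash / 26 miles
useful_rate = rate_per_mile(av_trips, useful_only=True)  # 1 crash / 22 miles
```

Counting only useful miles shrinks the denominator and so raises the per-mile rate slightly; how much it matters in practice depends on how much longer the autonomous routes really are.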
> so a small tweak like that
Well, it would seem these autonomous driving service providers disagree with your claim that it is just a 'small tweak', considering they only operate under these specific conditions when it would be to their substantial benefit to operate everywhere, at all times.
You consider it "sane" to compare the citywide driving statistics of midwinter Buffalo, New York with midsummer San Francisco, California driving limited to only Market and Van Ness streets?
> AI cars are already much safer drivers than humans.
Nothing like that has been shown. We have a bunch of very "motivated reasoning" studies, and the best you can conclude from them is that some circumstances exist where AI cars are safer drivers. The common trick is to compare the overall human record with the AI car record in super-tailored circumstances.
They have the potential to be safer drivers one day, if they are produced by companies that are forced by regulation to care about safety.