Personally I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it
I don't ever want to be inside an AI-driven vehicle that might decide to sacrifice me to minimize other damage
> to minimize other damage
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What's the ratio between "bodies of your own kids" and "other human bodies you have no connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*meaning, with a ratio of 2 for example, you would require 2 strangers' deaths to justify losing one of your own kids
Yeah, you also have to consider that your kids can be on either side of the equation too.
And honestly, the other side of the equation might be your kid being on the street when somebody else's AV causes the accident. Bonus points if the owner of the AV is not liable for the accident.
We can take the AI out of the question entirely and ask how many other humans you personally as a driver would be willing to mow down to avoid your own death—driving off a bridge, say.
I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.
> You mean deaths to multiple other people, do you not
I mean deaths the AI predicts for other people, yes
And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.
I don't believe any AV software out there attempts to solve the trolley problem. It's just not relevant, and moreover it's actually illegal to have that code in some situations.
You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
> deaths the AI predicts for other people
Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing energies in a series of rapid-fire calls?
It was an entire media beat-up, because the media was too afraid to talk about anything real and the public wasn't interested.
There's plenty we could talk about: e.g. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR, etc.
Instead we got "what if the car calculates the trolley problem against you?"
And observationally, it's proof that a staggering number of people don't know their road rules (since every variant of it consists of concocting some scenario where slamming on the brakes happens far too late, yet you somehow know perfectly well there's not a preschool behind the nearest brick wall or something).
I remember running some basic numbers on this in an argument, and you basically wind up here: if an AI is fast enough to detect a situation at all, it's fast enough that it can stop the car with the brakes, or else no level of aggressive manoeuvring would have avoided the collision anyway.
Which is of course what the road rules are: you slam on the brakes. Every other option is worse, and gets even worse given that an AI smart enough to even consider other options can also brake sooner and harder.
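For what it's worth, here's a minimal sketch of those basic numbers in Python. All the constants are my assumptions (~0.8 g dry-road braking, ~0.5 g of lateral acceleration, a 3.5 m lane width), not anything from a real AV stack:

    import math

    v = 50 / 3.6          # speed in m/s (50 km/h, city driving)
    a_brake = 7.8         # assumed braking deceleration, m/s^2 (~0.8 g)
    a_lat = 4.9           # assumed lateral acceleration, m/s^2 (~0.5 g)
    lane = 3.5            # lateral shift needed to clear an obstacle, m

    # Distance needed to brake to a full stop: v^2 = 2*a*d
    brake_dist = v**2 / (2 * a_brake)           # ~12 m

    # Time to shift one lane width sideways: lane = a_lat * t^2 / 2
    swerve_time = math.sqrt(2 * lane / a_lat)   # ~1.2 s
    swerve_dist = v * swerve_time               # ~17 m traveled forward meanwhile

    print(f"brake: {brake_dist:.0f} m vs swerve: {swerve_dist:.0f} m")

At city speeds the car stops in less road than it covers while swerving a full lane, so if a swerve could have cleared the obstacle, braking would have avoided it outright.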
> Which is of course what the road rules are: you slam on the brakes.
Yeah, there are a shocking number of accidents which basically amount to "they tried to swerve and it went badly".
You can concoct a few scenarios where other drivers are violating the road rules so much as to basically be trying to murder you -- the simplest example is "you are stopped at a light and a giant truck is barreling towards you too fast to stop".
If you are a normal driver, you probably learn about this when you wake up in the hospital, but an autonomous vehicle could be watching how fast vehicles are approaching from behind you. There's going to be a wide range of scenarios where it's clear the truck is not going to stop but there's still time to do something (for instance, a truck going 65 mph takes around 5 seconds to stop, so if it's halfway through its braking time, you've got around 2.5 seconds to maneuver out of the way).
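Back-of-envelope, those truck numbers check out under an assumed constant hard-stop deceleration of ~0.6 g (the figure is my assumption, not anything a real AV reports):

    MPH_TO_MS = 0.44704

    v = 65 * MPH_TO_MS            # initial speed: ~29 m/s
    a = 5.8                       # assumed deceleration, m/s^2 (~0.6 g)

    stop_time = v / a             # v = a*t    ->  ~5.0 s to stop
    stop_dist = v**2 / (2 * a)    # v^2 = 2*a*d -> ~73 m

    # Halfway through the braking time, half the window remains:
    print(f"stops in {stop_time:.1f} s over {stop_dist:.0f} m; "
          f"~{stop_time / 2:.1f} s left at the halfway point")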
That does leave you all sorts of room to come up with realistic trolley problems.
> That does leave you all sorts of room to come up with realistic trolley problems
But they all require a human (or malicious) driver on one side of the equation. The more rule-following AVs on the road, the fewer the opportunities for such trolley problems.
And I'd still argue that debating these ex ante is, while philosophically fascinating, not a practical discussion. I'm not seeing a case where one would code anything further than collision avoidance and e.g. pre-activating restraints.
Yeah, realistically the problems almost never happen and hopefully become rarer over time.
The typical human preference WRT the trolley problem ("don't take an action which leads to deaths, even if it would save more lives") is also a reasonable -- maybe the only reasonable -- answer to these hypotheticals.
I.e., move against the light to avoid getting rear-ended, but not if you're going to run over a pedestrian or cause an accident with another vehicle in doing so. (Even if getting rear-ended would push you into the pedestrian or other car.)
The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?
What is the lowest likelihood of your own death you'd find acceptable in this situation?
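To make the naive utilitarian arithmetic explicit (the probabilities are the hypothetical ones above, and I'm assuming the 90% applies to each child independently):

    children, p_child = 5, 0.90    # hit the group: assumed 90% fatality per child
    riders, p_rider = 1, 0.50      # swerve instead: 50% fatality for the occupant

    e_hit = children * p_child     # 4.5 expected deaths
    e_swerve = riders * p_rider    # 0.5 expected deaths

    print(f"expected deaths -- hit: {e_hit}, swerve: {e_swerve}")

A pure expected-deaths minimizer swerves at a 9:1 margin here; the question is how small that margin has to get before you'd want the car to stop protecting you.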
> not sure I would trust it
This is a fair concern. I’m unconvinced it’s even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
Sure, but what happens when the tech gains market capture and inevitably enshittifies, the same way every other piece of tech has?
I'm not really thinking about the era when self-driving is state-of-the-art research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
> what happens when the tech gains market capture
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I'm willing to cautiously align with Google and maybe even Tesla if they can take a bite out of those numbers.
[1] https://www.cdc.gov/nchs/fastats/accidental-injury.htm
What would that guarantee look like and would it be legal to sell a product that made that guarantee?
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
I find it interesting that you don't give other drivers any consideration in your analysis.
Other drivers should take public transit if they don't want to / are afraid to operate their own vehicles
As for me, I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be
No, I mean that they are not prioritizing you and many make poor choices.
Replacing other bad drivers with good autonomous systems is likely a great trade-off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
They are not afraid to operate their own vehicles. They are afraid you will kill them.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
Appreciate the honesty.
Sure, but then I don't want you to have a vehicle at all to minimize my own risk.
Feel free to minimize your own risk by staying home and never leaving