Robotaxi supervision is just an emergency brake switch.

Consumer supervision is having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and a foot on the pedals, ready to jump in.

Nah, the relevant factor, which has been obvious for years to anyone who cared to think about this stuff honestly, is that Tesla's safety claims about FSD are meaningless.

Accident rates under traditional cruise control are also far below average.

Why?

Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!

Tesla has always been able to publish the data needed to really understand performance, normalized by vehicle age and driving conditions. But they haven't, for reasons that were always obvious and are now undeniable.
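To make the selection-bias point concrete, here's a minimal sketch with entirely made-up numbers (nothing here is real Tesla or crash data) showing how a system used mostly in easy conditions can post a better headline crash rate while being worse in every individual condition:

```python
# Hypothetical illustration of Simpson's paradox in crash statistics.
# Every number below is invented for the example.

# (crashes per million miles, % of that driver's miles) by condition
human = {"easy_highway": (2.0, 40), "complex_urban": (8.0, 60)}
fsd   = {"easy_highway": (3.0, 90), "complex_urban": (12.0, 10)}  # worse in BOTH

def raw_rate(stats):
    """Mileage-weighted crash rate per million miles, ignoring conditions."""
    return sum(rate * share / 100 for rate, share in stats.values())

print(f"human raw rate: {raw_rate(human):.2f}")  # 5.60
print(f"FSD raw rate:   {raw_rate(fsd):.2f}")    # 3.90 -- looks 'safer'
# FSD is worse in every condition, yet its headline rate looks better,
# simply because it is mostly engaged on easy highway miles.
```

That's why the per-condition normalization matters, and why a single aggregate number tells you almost nothing.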

Yup. After I got a Tesla with a free FSD trial, it was obviously a death trap in any even slightly complex situation (like the highway on-ramp that was under construction for a year).

At least once every few days, it would do something extremely dangerous, like try to drive straight into a concrete median at 40 mph.

The way I describe it is: yeah, it’s self-driving and doesn’t quite require the full attention of normal driving, but it still requires the same amount of attention as supervising a teenager in the first week of their learner’s permit.

If Tesla were serious about FSD safety claims, they would release data on driver interventions per mile.

Also, the language when you turn on FSD in the vehicle is just insulting: the whole bit about how it would already be shipping like any iPhone app, but shucks, the lawyers are just so silly and conservative that we have to call it a beta.

> the same amount of attention as supervising a teenager in the first week of their learner’s permit.

Yikes! I’d be a nervous wreck after just a couple of days.

You learn where it’s good and where it’s bad. It definitely has a “personality”. It is awesome in certain situations, like bumper-to-bumper traffic.

I kept it for a couple months after the trial, but canceled because the situations it’s good at aren’t the situations I usually face when driving.

The most basic adaptive cruise control is "awesome" in bumper-to-bumper traffic.

Also, if it actually worked, Tesla's marketing would literally never shut up about it, because they would have a working fully self-driving car. That would be the first, second, and third bullet point in all their marketing, and they would be right to do that. It's an incredible differentiator from all their competition.

The only problem is, it doesn't work.

More importantly, we would have independent researchers looking at the data and commenting. I know this data exists, but I've never seen anyone who has the data and ability to understand it who doesn't also have a conflict of interest.

If it actually worked, Tesla would include an indemnity clause for all accidents while it’s active.

> Robotaxi supervision is just an emergency brake switch

That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.

Later, when they expanded the service area to include highways, they moved the employee to the driver's seat on those trips so that they could completely take over if something unsafe happened.

Interesting.

I wonder if these newly reported crashes happened with the employee in the e-brake position or in co-pilot mode.

Humans are extremely bad at vigilance when nothing interesting is happening. Lookout is a life-critical role you might be assigned as a track worker on the railways: your whole job is to watch for trains and alert your co-workers when one is coming, so they can retreat to a safe position while it passes. That seems easy, and these are typically close friends; you work with them every day, rotating roles, and you'd certainly not want them injured or killed. But it turns out it's basically impossible to stay vigilant for more than an hour or two, tops. Having insisted that you aren't tired, since you're just standing there watching while your mates work hard on the track, you nevertheless lose focus and, oops, a train passes without your conscious awareness and your colleague dies or suffers a life-changing injury.

This is awkward for any technology where we've made the task boring but not safe, so humans must still supervise, but we've made their job harder. Waymo understood that this is not a place worth getting to.

> Humans are extremely bad at vigilance when nothing interesting is happening

It would be interesting to try training a non-human animal for this. It would probably not work for learning things like rules of the road, but it might work for collision avoidance.

I know of at least two relevant experiments that suggest it might be possible.

1. During WWII, when the US was willing to consider nearly anything that might win the war (short of the totally insane occult or crackpot theories the Nazis wasted money on), it sponsored a project by B.F. Skinner to investigate using pigeons to guide bombs.

Skinner was able to train pigeons to look at an image projected on a screen showing a mix of US and Japanese boats, and to move their heads in a harness that would steer a falling bomb toward a Japanese boat. It was never actually deployed, but in simulator tests the pigeons did a great job.

2. I can't give a cite for this one, because I read it in a textbook over 40 years ago. A researcher trained pigeons to watch parts coming off an assembly line and peck a switch if a part had any visible defect.

There were a couple of really clever things about this. To train an animal to do something like this, you initially have to reward it frequently when it is right. Once it has learned the desired behavior, you can reward it less often and it will maintain the behavior, though you have to keep occasionally rewarding correct responses or the behavior eventually extinguishes.

The way they handled this ongoing occasional reward was to use groups of 3 pigeons. The part-rejection system was modified to go with a majority vote. Whenever the vote was not unanimous, the 2 pigeons in the majority got a reward. This happened frequently enough to keep the behavior from going extinct in the birds, but infrequently enough to avoid fat pigeons.

Once they had 3 pigeons trained by a human (who dispensed the frequent rewards needed during initial training) and working well on the line, they could use those 3 to train more. They did that by adding the trainee as a 4th member of the group. The trainee's vote was not counted, but if the other 3 were unanimous and the trainee agreed, the trainee was rewarded. This produced the frequent rewards needed to establish the behavior.
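In case it helps, here's a minimal Python sketch of that reward logic as described; the function names and simulated pigeons are my own invention for illustration, not anything from the study:

```python
import random

# Sketch of the 3-pigeon majority-vote inspection scheme described above.
# The reward rules follow the textbook account; everything else is invented.

def reward(pigeon):
    pass  # stand-in for dispensing grain

def inspect(part_is_defective, pigeons, trainee=None):
    """Return the line's accept/reject decision and hand out rewards."""
    votes = [p(part_is_defective) for p in pigeons]  # True = "reject"
    reject = sum(votes) >= 2                         # majority of 3 decides

    if len(set(votes)) == 2:
        # Split vote: only the majority pair gets fed. Frequent enough to
        # maintain the behavior, infrequent enough to avoid fat pigeons.
        for p, v in zip(pigeons, votes):
            if v == reject:
                reward(p)
    elif trainee is not None:
        # Bootstrapping a 4th bird: its vote doesn't count, but it is
        # rewarded whenever it agrees with a unanimous trained trio,
        # giving the frequent rewards needed to establish the behavior.
        if trainee(part_is_defective) == reject:
            reward(trainee)

    return reject

# Simulated pigeons: mostly accurate, occasionally wrong.
def make_pigeon(accuracy=0.95):
    return lambda defective: defective if random.random() < accuracy else not defective

line = [make_pigeon() for _ in range(3)]
print(inspect(part_is_defective=True, pigeons=line, trainee=make_pigeon(0.7)))
```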

The groups of 3 pigeons could do this all day with an error rate orders of magnitude lower than that of the human part inspector, who was good at the start of a shift but rapidly got worse as the shift went on.

Ultimately, the company that had let the researchers try this decided not to use it in production. They felt that no matter how much better the pigeons performed, and no matter how publicly they documented that fact, competitors' ads about the company using birds to inspect its parts would cost too many sales.

> And if you are doing it right, you have your hands on wheel and foot on the pedals ready to jump in.

Seems like there's zero benefit to this, then. Being required to pay attention, but having nothing (i.e., the act of driving) to keep me engaged, seems like the worst of both worlds. Your attention would constantly drift.

Similarly, Tesla using teleoperators for its Optimus robots is fake safety for robots that aren't autonomous either. They are constantly trying to cover their inability to make anything autonomous. Cheap lidar or radar would likely have prevented those "hitting stationary objects" accidents. Just because the Führer says it doesn't make it so.

They had supervisors in the passenger seat for a while but moved them back to the driver's seat, then moved some out to chase cars. In the ones where they're in the driver's seat, they can take over the wheel, can't they?

So the trillion-dollar company deployed 1-ton robots in unconstrained public spaces with inadequate safety data, and chose testing protocols that objectively heightened risk to the public in order to meet marketing goals? That is worse, and would generally be considered utterly depraved self-enrichment.

We also dump chemicals into the water, air, and soil that aren't great for us.

Externalized risks and costs are essential for many businesses to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.

The EU has one good regulation: if safety can be engineered in, it must be.

OSHA also has regulations to mitigate risk: lockout/tagout.

Both mitigate externalized risks. Good regulation mitigates known risk factors; unknown ones take time to learn about.

The Apollo program learned this with Apollo 1, when the bolted-shut hatch and the pure-oxygen environment meant the crew burned alive inside. Safety-first then became the basis of decision making.

Yes, those are bad as well. Are you seriously taking as your moral foundation that we need to poison the water supply to ensure executives get their bonuses? Is that somehow not utterly depraved self-enrichment?

That just makes the Robotaxi even more irresponsible.

I think they were so used to defending Autopilot that they got confused.