To be honest I think the true story here is:

> the fleet has traveled approximately 500,000 miles

Let's say they average 10 mph and operate 10 hours a day. That's 5,000 car-days of travel, or, to put it another way, about 30 cars over 6 months.
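A minimal sketch of that arithmetic (the 10 mph, 10 h/day, and ~6-month figures are all rough guesses, per the above):

```python
# Back-of-the-envelope fleet size from the assumptions above
# (10 mph average, 10 hours/day, roughly six months of operation).
total_miles = 500_000
avg_speed_mph = 10
hours_per_day = 10
days = 182  # roughly six months

car_days = total_miles / avg_speed_mph / hours_per_day
cars = car_days / days
print(f"{car_days:.0f} car-days, ~{cars:.0f} cars")  # 5000 car-days, ~27 cars
```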

That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.

One crash in this context is going to completely blow out their statistics, so it's kind of dumb to even talk about the statistics today. The real takeaway is that the Robotaxis don't really exist. They're in an experimental phase, and we're not going to get real statistics until they're doing 1,000x that mileage. That won't happen until they've built something that actually works, and that may never happen.

The more I think about your comment on statistics, the more I change my mind.

At first glance, I think you’re right - these are (thankfully) rare events, and because of this, the accident rate is Poisson distributed. At this low a rate, it’s really hard to know what the true average is, so we really do need more time/miles to know how well or badly the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. Still, we do have the statistical models to work with these rare events.

But then I think about your comment about it only being 30 cars operating over 6 months. Which makes sense, except it’s not like having a fleet of individual drivers. These robotaxis should all be running the same software, so statistically it’s more like one person driving 500,000 miles. That is a lot of miles! I’ve been driving for over 30 years and I don’t think I’ve driven that many. This should be enough data for a comparison.

If we are comparing the Tesla accident rate to people in a consistent manner (same accident classification), it’s a valid comparison. So, I think the way this works out is: given an accident rate of 1/500,000 miles, we could expect a human to have 9 accidents over the same mileage with a probability of ~ 1 x 10^-6. (Never do live math on the internet, but I think this is about right.)
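For what it’s worth, that figure checks out. A sketch of the Poisson calculation, assuming the 1/500,000-per-mile human rate from above:

```python
import math

# Assumed human rate from the comment above: 1 accident per 500,000 miles,
# so the expected count over 500,000 miles is mu = 1.
mu = 1.0
k = 9  # observed Tesla accident count

# Poisson probability of exactly k accidents when mu are expected
p = math.exp(-mu) * mu**k / math.factorial(k)
print(f"{p:.2e}")  # ~1.01e-06
```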

Hopefully they will get better.

500,000 miles / 30 years is ~16,667 mi/yr. While it's a bit above the US average, it's not incredibly so. Tons of normal commuters will have driven more than that in 30 years.

That’s not quite the point. I’m a bit of an outlier, I don’t drive much daily, but make long trips fairly often. The point with focusing on 500,000 miles is that that should be enough of an observation period to be able to make some comparisons. The parent comment was making it seem like that was too low. Putting it into context of how much I’ve driven makes me think that 500,000 miles is enough to make a valid comparison.

But that's the thing, in many ways it is a pretty low number. It's less than the number of miles a single average US commuter will have driven in their working years. So in some ways it's like trying to draw lifetime crash statistics while only looking at a single person in your study.

It's also kind of telling that, despite supposedly having this tech ready to go for years, they've only bothered rolling out a few cars, which are still supervised. If this tech were really ready for prime time, wouldn't they have driven more than 500,000 miles in six months? If they were really confident in the safety of their systems, wouldn't they have expanded this greatly?

I mean, FFS, they don't even trust their own cars to be unsupervised in the Las Vegas Loop. An enclosed, well-lit, single-lane, private access loop and they can't even automate that reliably enough.

Waymo is already doing over 250,000 weekly trips.[0] The trips average ~4 miles each. With those numbers, Waymo is doing 1 million miles a week. Every week, Waymo does twice as many miles unsupervised as Tesla's robotaxi has done supervised in six months.
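Spelling out the comparison (the 250,000 trips/week figure is from Waymo's page; the ~4 mi/trip average is an estimate):

```python
# Waymo's weekly mileage vs Tesla's six-month total, from the figures above.
waymo_trips_per_week = 250_000
avg_trip_miles = 4  # estimated average trip length
waymo_weekly_miles = waymo_trips_per_week * avg_trip_miles  # 1,000,000 mi/week

tesla_six_month_miles = 500_000
print(waymo_weekly_miles / tesla_six_month_miles)  # 2.0x Tesla's total, every week
```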

[0] https://waymo.com/sustainability/

Wait, so your argument is there's only 9 crashes so we should wait until there's possibly 9,000 crashes to make an assessment? That's crazy dangerous.

At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.

No, my argument is you shouldn't draw a statistical conclusion with this data. That's all. I'm kind of pushing in the direction you were pointing in the second part - it's not enough data to make statistical inferences. We should examine each incident, identify the root cause and come to a conclusion as to whether that means the system is not fit for purpose. I just don't think the statistics are useful.

> The real takeaway is that the Robotaxis don't really exist

More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.

Because it is fraud intended to inflate Tesla's stock price.

The real term is “marketing puffery.” It’s a fun, legally specific way to describe a company bullshitting to hype its product.

Puffery is like saying this is the best canned chili ever made. Selling a can of chili but delivering an empty can with no chili inside is fraud.

Puffery should really be limited to subjective things like flavor and not self-driving cars.

The Robotaxi service might be puffery, selling "full self driving" is just fraud.

A robotaxi rollout where each car needs a safety driver is fraud.

What's even more unbelievable is that a significant number of people are still falling for it.

We've known for a long time now that their "robotaxi" fleet in Austin is about 30-50 vehicles. It started off much lower and has grown to about 50 today. There's actually a community project to track individual vehicles that has more exact figures.

Currently it's at 58 unique vehicles (based on license plates), with about 22 that haven't been seen in over a month.

https://robotaxitracker.com/

No, they exist, but they are called Waymo.

But deep learning is also about statistics.

So if the crash statistics are insufficient, then we cannot trust the deep learning.

I suspect Tesla claims they do the deep learning on sensor data from their entire fleet of cars sold, not just the robotaxis.

>One crash in this context is going to just completely blow out their statistics.

One crash in 500,000 miles would merely put them on par with a human driver.

One crash every 50,000 miles would be more like having my sister behind the wheel.

I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!

If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: it would have been revoked and suspended after the first two or three accidents in her state, and then thrown in JAIL as a “scofflaw” if it continued driving.

> One crash in 500,000 miles would merely put them on par with a human driver.

> One crash every 50,000 miles would be more like having my sister behind the wheel.

I'm not sure if that leads to the conclusion that you want it to.

From the tone, it seems that the poster's sister is a particularly bad driver (or at least they believe her to be). While having an autonomous car that can drive as well as even a bad human driver is definitely a major accomplishment technologically, we all know that threshold was passed a long time ago. However, if Tesla's robotaxis (with human monitors on board, let's not forget - these are not fully autonomous cars like Waymo's!) are at best as good as some of the worse human drivers, then they have no business being allowed on public roads. Remember that human drivers can also lose their license if [caught] driving too poorly.

> Remember that human drivers can also lose their license if [caught] driving too poorly.

Thank you, yes, I could have said that better. But yeah, as a new human driver, if I’m too sloppy and get into too many incidents, the penalty is harsh. I’d say that “none of the autonomous companies are held to the same standard,” but that’s not 100% true: we do have cities and states refusing to play ball or issue permits here.

But that’s exactly right. My opinion was/is that there should be a probationary period for the first year of new autonomous technology (or major deviations from existing and proven technologies) too. And if it causes too many accidents or violations, then it should be held to the same standard I am.

It does. She just ran over a bus shelter, like she was vibe driving a Tesla on autopilot or something.


They might have forgotten how to share an anecdote, and their sister might just be a regular awful driver.
