When Tesla demoed self-driving in 2016 (via video) with the claim "The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself," and then unveiled the Semi in 2017, I tweeted out and honestly thought that the trucking industry had been changed forever and that it didn't make sense to be starting out in the trucking industry. It's almost the end of 2025, and either nothing came of it or only a small part of it panned out.

I think we all have become hyper-optimistic on technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all.

Look at Waymo, not Robotaxi. Waymo is essentially the self-driving vision I had as a kid, and ridership is growing exponentially as they expand. It's also very safe, if you believe their statistics[0]. I think there's a saying about overestimating stuff in the short term and underestimating stuff in the long term that seems to apply here, though the radiologist narrative was definitely wrong.

[0] https://waymo.com/safety/impact/

Even though the gulf between Waymo and the next runner-up is huge, it too isn't quite ready for primetime IMO. Waymos still suffer from erratic behavior at pickup/dropoff, around pedestrians, and on badly marked roads, and they generally jam on the brakes at the first sign of any ambiguity. As much as I appreciate the safety-first approach (table stakes really; they'd get their license pulled if they ever caused a fatality), I am frequently frustrated as both a cyclist and a driver whenever I have to share a lane with a Waymo. The equivalent of a Waymo radiologist would be a model with a high false-positive and infinitesimal false-negative rate, which would act as a first line of screening and reduce the burden on humans.

I've seen a lot of young people (teens especially) cross active streets or cross in front of Waymos on scooters knowing that they'll stop. I try not to do anything too egregious, but I myself have begun using Waymo's conservative behavior as a good way to merge into ultra high density traffic when I'm in a car, or to cross busy streets when they only have a "yield to pedestrian" crosswalk rather than a full crosswalk. The way you blip a Waymo to pay attention and yield is beginning to move into the intersection, lol.

I always wonder if honking at a Waymo does anything. A Waymo stopped for a (very slow) pickup on a very busy one lane street near me, and it could have pulled out of traffic if it had gone about 100 feet further. The 50-ish year old lady behind it laid on her horn for about 30 seconds. Surreal experience, and I'm still not sure if her honking made a difference.

I like Waymos though. Uber is in trouble.

Simultaneously, Waymo is adopting more human-like behavior like creeping at red lights and cutting in front of timid drivers as it jockeys for position.

I still think that Google isn't capable of scaling a rideshare program because it sucks at interfacing with customers. I suspect that Uber's long-term strategy of "take the money out of investors' and drivers' pockets to capture the market until automation gets there" might still come to fruition (see Austin and Atlanta), just perhaps not with Uber's ownership of the technology.

On the other hand, Google has been hard at work trying to make its way into cars via Android Automotive, so I totally see it resigning itself to just providing a reference sensor suite and a car "Operating System" to manufacturers who want a turnkey smart car with L3 self-driving.

>Simultaneously, Waymo is adopting more human-like behavior like creeping at red lights and cutting in front of timid drivers as it jockeys for position.

So before it was a 16yo in a driver's ed car. Now it's an 18yo with a license.

I'm gonna be so proud of them when it does something flagrantly illegal but any "decent driver who gets it" would have done in context.

I honestly don't think we will have a clear answer to this question anytime soon. People will be in their camps and that's that.

Just to clarify, have you ridden in a Waymo? It didn't seem entirely clear if you just experienced living with Waymo or have ridden in it.

I tried it a few times in LA. What an amazing, magical experience. I do agree with most of your assertions. It is just a super careful driver, but it does not have the full common sense that a driver in a hectic city like LA has. Sometimes you gotta be more 'human', and that means having the intuition to discard the rules in the heat of the moment (e.g. being conscious of how cyclists think instead of just blindly following the rules carefully; this is cultural, and computers don't do 'culture').

Waymo has replaced my (infrequent) use of Uber/Lyft in 80% of cases ever since they opened to the public via waitlist. The product is pretty good most of the time, I just think the odd long-tail behaviors become a guarantee as you scale up.

You have to consider that the AVs have their every move recorded. Even a human wouldn't drive more aggressively under those circumstances.

Probably what will happen in the longer term is that rules of the road will be slightly different for AVs to allow for their different performance.

> Waymos still suffer from erratic behavior at pickup/dropoff, around pedestrians, and on badly marked roads, and they generally jam on the brakes at the first sign of any ambiguity.

As do most of the ridesharing drivers I interact with nowadays, sadly.

The difference is that Waymo has a trajectory that is getting better while human rideshare drivers have a trajectory that is getting worse.

Society accepts that humans make mistakes and considers it unavoidable, but there exists a much higher bar expected of computers/automation/etc. Even if a Waymo is objectively safer in terms of incidents per mile driven, one fatality makes headlines and adds scrutiny about “was it avoidable?”, whereas for humans we just shrug.

I think the theme of this extends to all areas where we are placing technology to make decisions, but also where no human is accountable for the decision.

> Society accepts that humans make mistakes and considers it unavoidable, but there exists a much higher bar expected of computers/automation/etc.

There are a horde of bicyclists and pedestrians who disagree with you and are hoping that automated cars take over because humans are so terrible.

There are a horde of insurance companies who disagree with you and are waiting to throw money to prove their point.

When automated driving gets objectively better than humans, there will be a bunch of groups who actively benefit and will help push it forward.

> there exists a much higher bar expected of computers/automation/etc. Even if a Waymo is objectively safer in terms of incidents per mile driven, one fatality makes headlines and adds scrutiny about “was it avoidable?”

This doesn’t seem to be happening. One, there are shockingly few fatalities. Two, we’ve sort of accepted the tradeoff.

It does happen:

https://sfist.com/2024/05/14/waymo-now-under-federal-investi...

https://www.reuters.com/legal/litigation/us-closes-probe-int...

Similarly for Cruise: https://www.latimes.com/california/story/2024-05-16/woman-ge...

Sure. It happened. Folks moved on. Car companies are constantly being sued and investigated.

Cruise was outrageous because it fucked up in a way a human never would. (More germane: GM doesn’t have Google’s cash flow.)

Society only cares about the individual and no one else. If Uber/Lyft continue to enshittify, with drivers driving garbage broken-down cars, drivers with no standards (e.g. having just smoked weed), and ever-rising rates, eventually people will prefer the Waymos.

I am a long time skeptic of self-driving cars. However, Waymo has changed that for me.

I spend a lot of time as a pedestrian in Austin, and they are far safer than your usual Austin driver, and they also follow the law more often.

I always accept them when I call an Uber as well, and it's been a similar experience as a passenger.

I kinda hate what the Tesla stuff has done, because it makes it easier to dismiss those who are moving more slowly and focusing on safety and trust.

Yeah we don't need to compare robots to the best driver or human, just the average, for an improvement.

However, just as railroad safety is expensive and heavily regulated, self-driving car companies face the same issue.

Decentralized driving decentralizes risk.

so when I have my _own_ robot to do it, it'll be easy and cheap.

> Yeah we don't need to compare robots to the best driver or human, just the average, for an improvement.

Sure, in theory. In practice, nobody is going to give up control on the basis that the machine is "slightly better than average". Those who consider the safety data when making their decision will demand a system that's just as good as the best human drivers in most aspects.

And speaking of Waymo, let's not forget that they only operate in a handful of places. Their safety data doesn't generalize outside of those areas.

> And speaking of Waymo, let's not forget that they only operate in a handful of places. Their safety data doesn't generalize outside of those areas.

Yeah, I'm curious in seeing how they function in environments that get snow.

I agree with both comments here. I wonder what the plausibility of fully autonomous trucking is in the next 10-30 years...

Is there a saying about overestimating stuff in the near term and the long term but underestimating stuff in the midterm? E.g. flying car dreams in the 50s, etc.

I remember Bill Gates said: "We overestimate what we can do in one year and underestimate what we can do in ten years".

Not Musk. He promised full autonomy within 3 years about 10 years ago.

https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

Musk and Gates have very different philosophies.

Gates seems more calm and collected having gone through the trauma of almost losing his empire.

Musk is a loose cannon who has never suffered the consequences of his actions (unlike early Gates and Jobs), so he sometimes gets things right but will eventually crash and burn, not having had the fortune of failing and maturing early on in his career (he is now past the midpoint of his career, with not enough buffer to recover).

They are both dangerous in their own ways.

If it were about the cost of employees, you could ship it by rail. That simply isn't the reason.

> ... but underestimating stuff in the midterm? E.g. flying car dreams in the 50s, etc.

We still don't have flying cars 70 years later, and they don't look any more imminent than they did then. I think the lesson there is more "not every dream eventually gets made a reality".

Waymo is very impressive, but it also demonstrates the limitations of these systems. Waymo vehicles are still getting caught performing unsafe driving maneuvers, they get stuck in alleys in numbers, and responders have trouble getting them to acknowledge restricted areas. I am very supportive of this technology, but also highly skeptical as long as these vehicles are directly causing problems for me personally. Driving is more than a technical challenge; it involves social communication skills that automated vehicles do not yet have.

I've seen a similar quote attributed to Bill Gates:

"We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten."

I think about this quote a lot these days, especially while reading Hacker News. On one hand, AI doesn't seem to be having the productivity and economic impacts that were predicted, but on the other, LLMs are getting gold medals at the Math Olympiads. It's like the ground is shifting beneath our feet, but it's still too slow to be perceptible.

Waymo still has the ability to remotely deal with locations the AI has problems with; I'd love to know what percentage of trips need that now. Having that escape hatch, together with only operating in tested areas, makes their job a LOT easier. (Not that it's bad - it's a great thing and I wish for it here!)

It's limited to a few specific markets though. My bet is they aren't going to be able to roll it out widely easily. Probably need to do years of tests in each location to figure out the nuances of the places.

Yeah, I have no idea if Waymo will ever be a rural thing honestly, mostly for economic reasons. I'm skeptical it would get serious suburban usage this decade too. But for major cities where less than 80% of people own cars, test time doesn't seem to be making a difference. They've been expanding in Austin and Atlanta, seemingly with less prep time than Phoenix and San Fran.

Atlanta seems to be a bit contradictory to some of your other thoughts.

The city itself is relatively small. A vast majority of area population lives distributed across the MSA, and it can create hellish traffic. I remember growing up thinking 1+ hour commutes were just a fact of life for everyone commuting from the suburbs.

Not sure what car ownership looks like, and I haven’t been in years, but I’d imagine it’s still much more than just 20%

Austin is also a car city, everyone has a car there. Public transit in Austin is a joke, and Waymo can't get on the highway so it's only useful for getting back to your hotel from Rainey Street, and maybe back to your dorm from the Drag, but nobody is using Waymo to commute from Round Rock

> Not sure what car ownership looks like, and I haven’t been in years, but I’d imagine it’s still much more than just 20%

I said "less than 80% car ownership", not "80% do not own a car". Technically these are not mutually exclusive but I think you read it as the second one. I haven't really found much analysis about how public transit interfaces with self driving cars honestly.

They keep expanding in places where it doesn't snow.

They've got testing facilities in Detroit ( https://mcity.umich.edu/what-we-do/mcity-test-facility/ ) ... but I want to see it work while it is snowing or after it has snowed in the upper midwest.

https://youtu.be/YvcfpO1k1fc?si=hONzbMEv22jvTLFS - has suggestions that they're starting testing.

If AI driving only works in California, New Mexico, Arizona, and Texas... that's not terribly useful for the rest of the country.

If I were in charge of Waymo, I’d roll out in snowy places last. The odds of a “couldn’t be avoided” accident are much higher in snow/ice. I’d want an abundance of safety data in other places to show that the cars are still safe, and that it was the snow rather than the tech that caused the accident.

They're testing in Denver and NYC, so it's coming.

Define the rest of the country?

If you refer to rural areas, that's 1/7 of the population and ~10% of GDP. They can be tossed aside like they are in other avenues.

Doesn't it snow in NYC?

I could see it taking off in the suburbs/rural areas if they start having a franchise model when it’s more mature.

I saw this timeline a while ago: https://www.reddit.com/r/waymo/s/mSm0E3yYTY that shows their timeline in each city. Shows Atlanta at just over a year. I think once they've handled similar cities it gets easier and easier to add new ones.

Honestly, once an island city-state (like Singapore) or some other small nation state adopts self-driving only within its limits and shows that it is much easier when all cars are self-driving, I think the opposition to the change will slowly reduce.

Rain, snow, etc. are still challenges, but it needs a bold bet in a place that wants to show how futuristic it is. The components are in place (Waymo cars); what is needed is a labor cost high enough to justify the adoption.

> a saying about overestimating stuff in the short term and underestimating stuff in the long term

This is exactly what came to my mind also.

Well, part of the reason you may have felt misled by that video is that it was staged, so I wouldn't feel that bad.

https://www.reuters.com/technology/tesla-video-promoting-sel...

For me, I have been riding in Waymos for the last year and have been very pleased with the results. I think we WANT this technology to move faster, but some of the challenges at the edges take a lot of time and resources to solve; they are not fundamentally unsolvable, though.

Waymo is a 21-year-old company that only operates in a small part of the US after more than $10 billion in funding.

It's also widely believed that the cars are remotely operated, not autonomous.

They are likely semi-autonomous, which is still cool, but I wish they'd be honest about it.

They are:

Much like phone-a-friend, when the Waymo vehicle encounters a particular situation on the road, the autonomous driver can reach out to a human fleet response agent for additional information to contextualize its environment. The Waymo Driver does not rely solely on the inputs it receives from the fleet response agent and it is in control of the vehicle at all times. As the Waymo Driver waits for input from fleet response, and even after receiving it, the Waymo Driver continues using available information to inform its decisions. This is important because, given the dynamic conditions on the road, the environment around the car can change, which either remedies the situation or influences how the Waymo Driver should proceed. In fact, the vast majority of such situations are resolved, without assistance, by the Waymo Driver.

https://waymo.com/blog/2024/05/fleet-response/

Although I think they overstate the extent to which the Waymo Driver is capable of independent decisions. So, honest, ish, I guess.

Learning that the Amazon Go stores were powered by hundreds of people watching video, because the AI could not handle it, was a real eye-opener for me.

Is this why Waymo is slow to expand, not enough remote drivers?

Maybe that is where we need to be focused, better remote driving?

Waymo does not believe that remote drivers are responsive enough to be able to operate safely. Safety drivers communicate with the self-driving system and can set waypoints etc. for the navigation system, but the inherent delays make it unsafe, is what the Waymo people say publicly at least.

The reason that Waymo is slow to expand is that they have to carefully and extensively LiDAR-map every single road of their operating area before they can open up service there. Then, while operating, they simply run a diff between what each LiDAR sees at the moment and the truth data they have stored, and boom, anything that can potentially move pops right out. It works, it just takes a lot of prep - and a lot of people to keep on top of things too. For example, while my kid's school was doing construction they refused to drop off in the parking lot, but when the construction ended they became willing. So there must be a human who is monitoring construction zones across the metro area and marking up on their internal maps when areas are off limits.
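
For intuition, here is a minimal sketch of that kind of "diff against a prior map" step. It is purely illustrative, not Waymo's actual pipeline; the voxel size, the synthetic point clouds, and the function names are all made up for the example.

    # Toy illustration of diffing a live LiDAR scan against a stored prior map.
    # Not Waymo's real pipeline; all numbers and names here are invented.
    import numpy as np

    VOXEL = 0.5  # grid cell size in metres for this toy example


    def voxelize(points: np.ndarray) -> set:
        """Map an (N, 3) array of xyz points to the set of occupied grid cells."""
        cells = np.floor(points / VOXEL).astype(int)
        return set(map(tuple, cells))


    def flag_potentially_moving(prior_map: np.ndarray, live_scan: np.ndarray) -> np.ndarray:
        """Return live points whose cells are absent from the prior map.

        Anything not explained by the static map (buildings, curbs, signs) is a
        candidate moving object that would then be tracked and classified.
        """
        prior_cells = voxelize(prior_map)
        live_cells = np.floor(live_scan / VOXEL).astype(int)
        mask = np.array([tuple(c) not in prior_cells for c in live_cells])
        return live_scan[mask]


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Pretend prior map: ground-level returns scattered over a 50 m x 50 m area.
        ground = np.column_stack([rng.uniform(0, 50, 5000),
                                  rng.uniform(0, 50, 5000),
                                  rng.uniform(0.0, 0.4, 5000)])
        pedestrian = np.array([[25.2, 25.3, 1.0]])      # new return, above ground level
        scan = np.vstack([ground[::10], pedestrian])    # today's (subsampled) sweep
        print(flag_potentially_moving(ground, scan))    # only the pedestrian pops out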

> Maybe that is where we need to be focused, better remote driving?

I think maybe we can and should focus on both. Better remote driving can be extended into other equipment operations as well - remote control of excavators and other construction equipment. Imagine road construction, or building projects, being able to be done remotely while we wait for better automation to develop.

This is an interesting idea. What are the expected benefits? Off the top of my head:

* Saves on commute or travel time.

* Job sites no longer need to provide housing for workers.

* Allows the vehicles to stay in operation continuously, currently they shut down for breaks.

* With automation multiple vehicles could be operated at once.

The biggest benefits seem to be in resource extraction but I believe the vehicles there are already highly automated. At least the haul trucks.

Is that true? Nearly everything online argues against that

https://waymo.com/blog/2024/05/fleet-response/

It does; they argue because they are clueless or have a vested interest.

Sometimes both.

No, it's just that machine learning was always awesome for 98% of the cases. We got fooled into thinking we could easily deal with the remaining 2%.

It is the usual complexity rule of software: solving 80% of the problem is usually pretty easy and only takes about 50% of the estimated effort; it is the remaining 20% that takes up another 90% of the estimated effort (thus the usual schedule overruns).

The interesting thing is that there are problems for which this rule applies recursively. Of the remaining 20%, most of it is easier than the remaining 20% of what is left.

Most software ships without dealing with that remaining 20%, and largely that is OK; it is not OK for safety critical systems though.
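
To make the compounding concrete, here is a toy back-of-the-envelope model. Beyond the 80%-for-50% and 20%-for-90% split stated above, the numbers are my own extrapolation, assuming the rule keeps applying recursively to whatever is left.

    # Toy model of the recursive 80/20 effort rule described above.
    # Assumption (mine): every pass solves 80% of whatever problem remains;
    # the first pass costs 0.5x the original estimate, each later pass 0.9x.
    def coverage_and_effort(rounds: int) -> tuple[float, float]:
        remaining = 1.0   # fraction of the problem still unsolved
        effort = 0.0      # cumulative effort in multiples of the original estimate
        cost = 0.5        # first round: the "easy 80%" for half the estimate
        for _ in range(rounds):
            remaining *= 0.2   # knock out 80% of what's left
            effort += cost
            cost = 0.9         # every later "last 20%" costs another 90%
        return 1.0 - remaining, effort

    for n in range(1, 5):
        covered, spent = coverage_and_effort(n)
        print(f"{n} round(s): {covered:.2%} solved for {spent:.1f}x the estimate")
    # 1 round:  80.00% solved for 0.5x the estimate
    # 3 rounds: 99.20% solved, but already 2.3x the original estimate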

> thought that the trucking industry had been changed forever

What I find really crazy is that most trains are still driven by humans.

only 2 people (engineer and conductor) for an entire train that is over a mile long seems about right to me though

Much of that is about union power more than tech maturity.

Most of the work is actually in oversight and in getting the train to run when parts fail. When running millions of machines 24/7, there is always a failing part. Also, understanding gesticulating humans and running wildlife is not yet (fully) automatable.

How does a human conductor stop a train from hitting a deer? Do they spot it from 3 miles away?

The THSR I rode solved the wildlife problems with a big windshield wiper. Not sure what else there is to do. It’s a train.

At least one train has crashed with fatalities due to hitting a cow.

Can you provide a reference?

That's difficult to believe. Was this a diesel locomotive pulling a freight train or was it something smaller/lighter?

Not your correspondent, but trains are quite easy to derail, because the wheels just ride on top of the rail with minimal lateral guidance, and because otherwise there would be way worse crashes.

The cow might not have caused the fatalities directly, but rather the derailment; a fast train crashing unchecked through the landscape has a lot of kinetic energy.

I decided to do a little searching myself, now that I'm at my PC. In 1984, 13 people were killed in a derailment caused by hitting a cow, but that was a "push-pull" train, and it wasn't the locomotive that hit the cow but a much lighter "control car".[1] In another case, a train hit a cow and launched it through the air, and it struck and killed someone.[2]

Also, in [1] there weren't any cowcatchers on the train either. All the trains (besides possibly an old steam locomotive, IDR) that I've seen in my life have cowcatchers and also a locomotive in the lead.

[1] https://mx-schroeder.medium.com/unholy-cow-the-1984-polmont-... [2] https://morningsidekick.com/indian-man-killed-in-freak-train...

The go-to is to scare the animal or human away with the horn.

I think it's a matter of scale. There are way more truck drivers than locomotive engineers.

Where we're too optimistic is with technology that demos impressively, but which has 10,000 potentially-fatal edge cases. Self-driving cars and radiology interpretation are both in this category.

When there are relatively few dangerous edge cases, technology often works better than we expect. TikTok's recommendation algorithm and Shazam are in this category.

A lot of people in the industry really underestimated the difficulty of getting self-driving cars to be effective. It is relatively easy to put a staged demo together, but getting a trustworthy product out there is really hard.

We've seen this with all of the players, with many dropping out due to the challenges.

Having said that, there are several that are fielded right now, with varying degrees of autonomy. Obviously Waymo has been operating in small-ish geofences for a while, but they have managed >200% annual growth readily. Zoox just started offering fully autonomous rides in Vegas.

And even Tesla is offering a service, albeit with safety monitors/drivers. Tesla Semi isn't autonomous at all, but appears ready to go into volume production next year too.

Your prediction will look a lot better by 2030.

This story (the demand for Radiologists) really shows a very important thing about AI: It's great when it has training data, and bad at weird edge cases.

Gee, seems like about the worst fucking thing in the world for diagnostics if you ask me, but what do I know, my degree is in sandwiches and pudding.

This is such a stereotypical SF / US based perspective.

Easy to forget the rest of the world does not and never has ticked this way.

Don't get me wrong, optimism and thinking of the future are great qualities we direly need in this world on the one hand.

On the other, you can't outsmart physics.

We've conquered the purely digital realm in the past 20 years.

We're already in the early years of the next phase, where the digital will become ever more multi-modal and make more inroads into the physical world.

So many people bring an old mindset to a new context, where the margin of error, the cost of mistakes, or optimizing the last 20% of a process is just so vastly different from a bit of HTML, JS, and backend infra.

Things happen slowly, then all at once. Many people think ChatGPT appeared out of nowhere a couple of years ago. In reality it had been steadily improving for 8 years. Before that, the groundwork was laid with word embeddings like Word2Vec. Before then, Yoshua Bengio and colleagues proposed the first neural probabilistic language model, introducing distributed word representations (precursors to embeddings). Before then, statistical NLP took hold, with n-gram models, hidden Markov models, and later phrase-based machine translation. And before that, work on natural language processing (NLP) began with symbolic AI and rule-based systems (e.g., ELIZA, 1966).

These are all stepping stones, and eventually the technology is mature enough to productise. You would be shocked by how good Tesla FSD is right now. It can easily take you on a cross-country trip with almost zero human interventions.

> It's almost the end of 2025, and either nothing came of it or only a small part of it panned out.

The truck part seems closer than the car part.

There are several driverless semis running between Dallas, Houston, and San Antonio every day. Fully driverless. No human in the cab at all.

Though, trucking is an easier to solve problem since the routes are known, the roads are wide, and in the event of a closure, someone can navigate the detour remotely.

I realized long ago that full unattended self driving requires AGI. I think Elon finally figured that out. So now LLMs are going to evolve into AGI any moment. Um, no. Tesla (and others) have effectively been working on AGI for 10 years with no luck.

> I realized long ago that full unattended self driving requires AGI.

Not even close.

The vast majority of people have a small number of local routes completely memorized and do station keeping in between on the big freeways.

You can see this when signage changes on some local route and absolute chaos ensues until all the locals re-memorize the route.

Once Waymo has memorized all those local routes (admittedly a big task), it's done.

> I realized long ago that full unattended self driving requires AGI.

Yikes.

I recommend you take some introductory courses on AI and theory of computation.

You should either elaborate on your argument, or at least provide further reading that clarifies your point of contention. This kind of low effort nerd-sniping contributes nothing.

Responding to ridiculous, uncited, wild comments doesn't require a PhD thesis, my friend.

GP's statement is completely unsupported, the burden is on them.

It's a commonly brought-up saying, and I don't think it's too far from the truth.

Driving under every condition requires a very deep level of understanding of the world. Sure, you can get to like 60% with simple robot-vacuum logic, and to like 90% with what e.g. Waymo does. But the remaining 10% is crazy complex.

What about a plastic bag floating around on a highway? The car can see it, but is it an obstacle to avoid? Should it slam the brakes? And there are a bunch of other extreme examples (what about a hilly road on a Greek island where people just honk to notify the other side that they are coming, without seeing them?)

That comment isn't going to age well.

> I realized long ago that full unattended self driving requires AGI.

You can do 99% of it without AGI, but you do need it for the last 1%.

Unfortunately, the same is true for AGI.

So Waymo has AGI?

They deliberately (and smartly) set their working limits to what they can solve - known city, always decent weather conditions. And they still added a way for a remote operator to solve certain situations.

So no, they don't have AGI and there is a lot to reach "working under every condition everywhere" levels of self-driving.

For trucking I think self driving can be, in the short term, an opportunity for owner-operators. An owner-operator of a conventional truck can only drive one truck at a time, but you could have multiple self driving trucks in a convoy led by a truck manned by the owner-operator. And there might be an even greater opportunity for this in Europe thanks to the low capacity of European freight rail compared to North America.

I used to think this sort of thing too. Then a few years ago I worked with a SWE who had experience in the trucking industry. His take was that most trucking companies are too small-scale to benefit from this. The median trucking operation is basically run by the owner's wife in a notebook or spreadsheet, and so their ability to get the benefits of leader/follower mileage like that just doesn't exist. He thought that maybe the very largest operators - Walmart and Amazon - could benefit from this, but that no one else could.

This was why he went into industrial robotics instead, where it was clear that the finances could work out today.

Yeah, I guess the addressable market of “truck owners who can afford to buy another truck but not hire another driver” might be smaller than I thought.

Trucks are harder. The weight changes a lot, they are off grid for huge stretches, mistakes are more consequential.

"I think we all have become hyper-optimistic on technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all."

Who is "we"? The people who hype "AI"?

It's also like nobody learns from the previous hype cycles: short-term, overly optimistic predictions followed by disillusionment, and then long-term benefits which deliver on some of the early promises.

For some reason, enthusiasts always think this time is different.

>> it didn't make sense to be starting out in the trucking industry

Still true, as working conditions are harsh, the schedule as well, and responsibilities and fines are high while the pay is not.

Waymo has worked out. I’ve taken one so many times now I don’t even think about it. If Waymo can pull this off in NYC I believe it will absolutely be capable of long distance trucking not that far in the future.

Trucks are orders of magnitude more dangerous. I wouldn’t be surprised if Waymo is decades away from being able to operate a long haul truck on the open interstate.

Given that Aurora Innovation is running driverless semi-trucks on commercial routes between Dallas, Houston, and San Antonio as of right now, that would be surprising, yes, but for different reasons.

Aurora also runs with a CDL-licensed safety driver in the driver’s seat, and they operate only on very carefully planned routes in restricted conditions.

Meanwhile, it’s my feeling that technology is moving insanely fast but people are just impatient. You move the bar and the expectations move with it. I think part of the problem is that the market rewards execs who set expectations beyond reality. If the market was better at rewarding outcomes not promises, you’d see more reasonable product pitches.

How have expectations moved on self driving cars? Yes, we're finally getting there, but adoption is still tiny relative to the population and the cars that work best (Waymo) are still humongously expensive + not available for consumer purchase.

Fundamental change does indeed happen very slowly. But it does happen.

80/20 rule might be 99/1 for AI.

The universe has a way with being disappointing. This isn't to say that life is terrible and we should have no optimism. Rather, that things generally work out for the better, but usually not in the way we'd prefer them to.

It's not about optimism. It is well established in the industry that Tesla's hardware-stack gives them 98% accuracy at the very most. But those voices are drowned by the marketing bravado.

In the case of Musk, it has worked out. His lies have earned him a fortune, and now he asks Tesla to pay him out with a casual $1 trillion paycheck.

What does ‘accuracy’ mean here?

To correctly assess the state of the world. Since Tesla exclusively uses visual sensors, they are massively limited in how accurate it can ever be… or safe.

But hey, costs are lower that way.