The fact that Tesla doesn't have a process for making crash data available to investigators is pretty indefensible IMO, given they're retaining that data for their own analysis. Would be one thing if they didn't save the data for privacy reasons, but if they have it, and there's a valid subpoena, they obviously need to hand it over.

For context though, note that this crash occurred because the driver was speeding, using 2019 autopilot (not FSD) on a city street (where it wasn't designed to be used), bending down to pick up a phone he had dropped on the floor, and keeping his foot on the gas, which overrode the automatic braking: https://electrek.co/2025/08/01/tesla-tsla-is-found-liable-in... The crash itself was certainly not Tesla's fault, so I'm not sure why they were stonewalling. I think there's a good chance this was just plain old incompetence, not malice.

The article explains that the crash snapshot shows:

* hands off wheel

* autosteer had the steering wheel despite a geofence flag

* no take-over warnings, despite approaching a T intersection at speed

Letting people use autopilot in unsafe conditions is contributory negligence. Given their marketing, that's more than worth 33% of the fault.

That they hid this data tells me everything I need to know about their approach to safety. Although nothing really new considering how publicly deceitful Musk is about his fancy cruise-control.

When you put your foot on the accelerator while on autopilot, a warning pops up saying the car will not brake.

From the article, the data Tesla withheld from the court revealed:

* There was no record of a “Take Over Immediately” alert, despite approaching a T-intersection with a stationary vehicle in its path.

* Moore found logs showing Tesla systems were capable of issuing such warnings, but did not in this case.

Every time you put your foot on the accelerator while the system is engaged, a warning pops up saying the car will not brake.

It happens right away and has nothing to do with any other warnings. If you own a Tesla you have seen this warning over and over.


No aircraft manufacturer is misleading pilots about their automation capabilities.* Pilots have to go through extensive training, including a type rating on the plane they will fly. They are acutely aware of its limitations. When anything does go wrong, there is a thorough and public postmortem, with legally binding findings and mitigations.

For Tesla, anyone that can read the one sentence description of enabling autopilot (double tap the stalk) can use it in any condition, without any special training, and the company will stonewall any accident investigation. The entire term "full self driving" is just "puffery".

* Yes, MCAS, but this is not an autopilot/autodrive system, and Boeing is in trouble for this.

I mean, to get a driver's licence, you also have to go through extensive training and pass a test, including a test on the type of car you drive.

If you're involved in a crash, you're also held legally responsible.

The nagging when you're not paying attention during Autopilot or FSD has come a long way since 2019, too.

Why aren't cruise-control vendors hit with quarter-billion-dollar fines whenever someone mistakes the marketing of the feature as controlling the cruise, e.g., because the name was misleading?

Cruise control, it means, I just press the button, and relax, right? That's what the sales person told me! Why did the car veer off the highway? Why are they calling it cruise control when it doesn't control the cruise? There should have been a different name for this super-misleading feature!

Hah, comparing the training of an airline pilot to that of a driver. Thanks for the laugh.

I'll give you one thing: it's a great example of the continuum fallacy. I'm definitely going to use it in the future to help explain the fallacy. Thanks a bunch.

Not even slightly comparable to the training pilots receive.

> I mean, to get a driver's licence, you also have to go through extensive training and pass a test, including a test on the type of car you drive.

Not necessarily. There are still people driving who essentially bought their driver's license without any formal training.

What do you think about the Wikipedia article titled "False equivalence" that describes the logical fallacy?

https://en.wikipedia.org/wiki/False_equivalence

How exactly does it apply to autopilot, though?

The false equivalence is more akin to the "auto" of "autopilot" implying that the driver doesn't have to supervise, yet that connotation is never presented as a fault of the original system for which the name was initially devised.

> I think there's a good chance this was just plain old incompetence, not malice.

The meme of Hanlon's Razor needs to die. Incompetence from a position of power is malice, period.

As a condensed variant, yes.

A bit more nuanced version is that incompetence from a position of power is a choice.

That seems contrary to my experience. Large, powerful bureaucracies are often highly incompetent in ways that clearly work against their own interests. If it were merely a choice they wouldn't choose to be incompetent in those ways.

I guess you could go even more nuanced and say sometimes incompetence from a position of power is a choice, and I would agree with that, but now the statement seems so watered down as to be almost meaningless.

I feel like this is getting far too abstract, to the point that you're actively losing sight of a very real, very concrete, and very specific set of actions they took, actions which don't appear to have any credible, innocent motive but which happen to align perfectly with what, by all reasonable definitions, would be considered malicious.


Large companies in the US, especially at the current moment in history, have huge amounts of power vested in individual executives.

If those executives valued not being incompetent in any specific given way (especially in the ways that harm the many), they have the power to change that. They can say "no, we need to make sure this never happens again."

The fact that they choose not to do that, in so, so many cases, has a variety of causes, but in the end what it fundamentally boils down to is that they choose not to do it.

The point is not to come up with a simple rule that is going to give you the correct answer in every case but to come up with a simple rule that is going to give you the best outcome overall. You need to think about it in game theory terms:

Friends are unlikely to be randomly malicious, while assuming malice for every mistake of theirs will quickly ruin your friendship. So Hanlon's razor makes sense.

Corporations, on the other hand, cannot be assumed to have morals or to care about you. You are already fungible to them, so assuming malice until proven otherwise is not going to make things worse for you. Meanwhile, giving corporations the benefit of the doubt allows the truly malicious ones to take advantage of it, and corporations, unlike your friends, don't really have any other feedback loops that keep them honest.

"clearly work against their own interests"

Perhaps, but perhaps there is a bigger set of constraints, not visible to an outsider, which the "bureaucrats" are trying to satisfy.

It is also a question of whose interests we are talking about. The interests of individual decision makers can be contrary to those of the company as a whole.

We were talking about people, not bureaucracies. You've already argued "they were just following orders" on behalf of those who implement. Do you mean now also to excuse those who originate? Or do you really think anyone is going to believe, in 2025, that there is nowhere the buck stops?

Integrity is for suckers. The system rewards those who play by its rules.

This doesn't acknowledge reality. Tesla has a position of power, but that doesn't mean Tesla is free from incompetence or can ever be free from it.

But Tesla has sufficient power that they do not have the luxury of pleading incompetence when things like this happen.

This is both because such incompetence costs people's lives, and because they have enough money that they could definitely hire more or better people to re-check and add redundant safety features into their products.

The problem is, they do not want any accountability for claiming that their cars are "self-driving", or for any of their other errors or willful endangerment of the public.

Trillion-dollar companies run by egomaniacal billionaires do not need you rushing to your keyboard to make excuses for them.

A corporation can hire people and put processes in place to arbitrarily minimize (or not) the chance of a mistake in areas that matter to them. In this case, they did just that; only the thing being optimized for was “not giving data to the authorities”.

The evidence of this trial does not support an “oopsie poopsie we messed up so sowwy” interpretation of events. Tesla’s paid representatives went out of their way—repeatedly—to lie, mislead, and withhold evidence in order to avoid scrutiny. Fuck them and everyone involved with that.

This comment says more about your relationship with the world than it does about the subject matter

My relationship with the world is one where I want billionaires and companies with unprecedented amounts of power to be held to account when they act maliciously, negligently, or irresponsibly, and not one where we try to find any reason to excuse their bad behavior.

The real question is why don’t you feel the same?

You are arguing with yourself. I said that Tesla isn't free from incompetence despite its power. You started ranting about billionaires.

We're having different conversations.

>I said that Tesla isn't free from incompetence despite its power.

Ok, and what is your point? Do you have a point, or was that just a random observation related to nothing and implying nothing? Because, given the comment you responded to, your implication seems to be that Tesla's incompetence is not malicious, that it's just one of those things: the powerful are incompetent sometimes too.

But Tesla decides where to put funding and resources, so if they put funding into covering up and hiding data, and they don't put funding into safety systems and into testing that their "autopilot" engages and disengages properly, that is malice.

And again, if that is not the implication of your comment, please just let me know what your intent was, and I will correct myself.

Comments like yours are why we're on the slide into fascism with little fanfare. Oops! We accidentally committed treason. No reason to hold us responsible for our actions!

Incompetence can be malice if you are in a position where you are required to be competent - like being licensed to produce cars that share public roads with others.

And in both cases they should be held accountable.

I don't have an excuse for why their lawyer did the lawyerly stuff, but as far as the unlinking of the file being done in software, "destroying evidence", I think the explanation is far more benign.

If you're big on privacy, things like logging incorrect password attempts are a big no-no. We have to "thank" the privacy advocates for that.

How do you think the owner of the car would feel if the file was visible in plain sight to the next owner of the vehicle?
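
For what it's worth, the benign reading here is just the ordinary "upload, then delete the local copy" telemetry pattern. A minimal sketch of that pattern, with an invented endpoint and function name (emphatically not Tesla's actual code):

```python
# Purely hypothetical sketch -- not Tesla's actual code or endpoint.
# Illustrates the benign "upload the snapshot, then delete the local copy"
# pattern described above, where the unlink is a privacy/cleanup step.
import os

import requests  # assumed available; any HTTP client would do

SNAPSHOT_PATH = "/var/crash/snapshot_collision_airbag-deployment.tar"  # illustrative path
UPLOAD_URL = "https://telemetry.example.com/crash-upload"              # made-up endpoint


def upload_then_cleanup(path: str, url: str) -> None:
    """Upload the crash snapshot, then remove the local copy on success."""
    with open(path, "rb") as f:
        resp = requests.post(url, data=f, timeout=60)
    resp.raise_for_status()  # keep the local file if the upload failed
    os.unlink(path)          # delete locally so a later owner of the hardware
                             # can't read the previous owner's crash data


if __name__ == "__main__":
    upload_then_cleanup(SNAPSHOT_PATH, UPLOAD_URL)
```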

I own a Tesla, and here's my take on the biggest software issue:

Normal consumers don't understand the difference between "Autopilot" and "FSD".

FSD will stop at intersections/lights etc - Autopilot is basically just cruise control and should generally only be used on highways.

They're activated in the same manner (FSD replaces Autopilot if you pay for the upgrade or $99/month subscription), and again for "normal" consumers it's not always entirely clear.

A friend of mine rented a Tesla recently and was in for a surprise when the vehicle did not automatically stop at intersections on Autopilot. He said the previous one he rented had FSD enabled, and he didn't understand the difference.

IMO Tesla just needs to phase out 2019 AP entirely and just give everyone some version of FSD (even if it's limited), or geofence AP to highways only.

Why is that so, though? Because of false marketing to a degree that is criminal. Elon does have one excuse: Tesla would have gone bankrupt several times over if not for his purposeful criminal lies. Does he actually care about the company? He has pumped out orders of magnitude more value from Tesla than anyone in the history of any company; that is how much he cares about the company surviving. Criminal. And too dum* to think of anything innovative to save the company.

Withholding safety-relevant features unless you pay a subscription sounds like something from dystopian fiction, not something that should be allowed in the real world.

In my experience, even most Tesla owners don't really seem to understand the difference between Autopilot and FSD.

However, even though Autopilot doesn't obey traffic control devices, it still DOES issue warnings if taking over may be required.

Most Tesla owners I've talked with are actually completely unaware of the v12 and v13 improvements to FSD, and generally have the car for reasons other than FSD. So, if anything, Tesla is actually quite behind on marketing FSD to regular folk, even to those who are already Tesla owners.

Not in 2019 it didn't.

That’s not the point. The point is whether Tesla's marketing led the driver to overestimate the car's capabilities, leading him to engage in reckless behavior. He admitted on the stand that he was acting carelessly and that autonomous mode required supervision; however, he also admitted that, _based_ on Tesla's marketing, he thought the car would drive better than a human and intervene when required. Looking at Musk's tweets and the Paint It Black video, the jury agreed that this was not an unreasonable belief, even though it was factually _not_ true, and found Tesla 33% liable for the accident.

The fact that Tesla purposely misled the investigators and hid evidence was why the jury awarded such a large sum.

> https://electrek.co/2025/08/01/tesla-tsla-is-found-liable-in...

> Update: Tesla’s lawyers sent us the following comment about the verdict:

> Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.

---

Personally, I don't understand how people can possibly be happy with such verdicts.

Recently, in 2025, DJI got rid of their geofences as well, because it's the operator's responsibility to control their equipment. IIRC, DJI had the FAA's support in removing the geofencing limitations, with the FAA expressly confirming that geofencing is not mandated.

These sorts of verdicts that blame the manufacturer for operator errors are exactly why we can't have nice things.

It's why we get WiFi and 5G radios, and boot loaders, that are binary-locked, with no source code availability, and which cannot be used with BSD or Linux easily, and why it's not possible to override anything anywhere anymore.

Even as a pedestrian, I'm glad that Tesla is fighting the good fight here. Because next thing I know, these courts will cause the phone manufacturers to disable your phone if you're walking next to a highway.

I agree. This hurts competent people who want to have responsibility and the freedom that brings.

> This hurts competent people who want to have responsibility and the freedom that brings.

Perhaps, but does it hurt more or less than getting life-changing injuries and your partner killed by a Tesla?

So, we have to ignore the entire safety record for the entire technology just because one operator has failed to follow the instructions?

This is especially the case for something that was in its infancy back in 2019 when this crash happened.

And you know what we have in 2025 because of those restrictions being enforced since then?

In 2025, Teslas nag drivers so much for not paying attention to the road that drivers no longer keep the much safer versions of autopilot engaged at all when looking for their phones.

Instead, now, because the issue is "fixed", Tesla drivers simply do the same thing that drivers of any other car do in that situation.

They disable autopilot first, and only then stop paying attention to the road, looking for their phone.

How's that safer?

We're precisely less safe because of these regulatory requirements.

(And, to add insult to injury, this court is now using 20/20 hindsight, the fact that these warnings were subsequently implemented, as evidence of Tesla's wrongdoing in 2019, at a time before anything like that was thought to be possible. Even though, now that these warnings have been implemented, we already have evidence that the nags themselves make everybody less safe, since autopilot is simply turned off when the driver wants to stop paying attention to the road.)

What safety record are we ignoring? Can you please cite some scientifically rigorous and statistically sound data, evidence, and analysis?

Or are you talking about self-published numbers by the company that is proven to withhold, lie, and misdirect in even official police investigations, subpoenas, and trials where it is actively illegal to do so?

Are we talking numbers with a degree of scientific rigor unfit for publication in a middle school science fair, let alone the minimum standard of scientific rigor that members of their team had to meet to get their degrees, yet somehow fail to meet when detailing systems that are literally responsible for the life and death of humans?

So where's your unbiased data then?

Where's your data that these nags make everyone safer, when it's widely known that they simply result in people turning off the entire autopilot/FSD when the operator needs to stop paying attention to the road, to avoid the nags and the penalty strikes?

Where's all the news reports about the crashes without the autopilot engaged? If they were as rare as the autopilot ones, surely we'd have seen some of them covered by the media, right? Or are they so rare that not a single one has happened yet, hence, the lack of any reports being available?

You are the one claiming it has a “safety record”.

You are the one claiming “We're precisely less safe because of these regulatory requirements.”

Support your assertion with scientifically rigorous, statistically sound evidence.

And no, your ignorance of safety problems is not evidence of safety despite your attempts to argue as such. That was not a valid argument when the cigarette companies made it and it is not valid now.

The law is not supposed to make the utilitarian choice of what causes the least net harm to all people, individual rights be damned. That someone died while unchained is not a compelling argument for everyone to be put in irons.

Everyone without a driver's license and liability insurance is in chains with respect to driving a car on public roadways, and it is the broadly unanimous opinion that this makes perfect sense. It's a mix of utilitarian grounds, externalities, etc.

You can't build a large dynamite factory in a residential neighborhood either even if you don't intend for it to blow up.

Well yeah, the law isn't "do whatever you want, there are no rules" either. The important thing is that getting your license and insurance is purposefully an extremely low bar. Driving is a dangerous activity; someone could hurt or kill themselves or others every time they get behind the wheel, but so long as you have demonstrated the most basic understanding of what you're doing (i.e., getting a license) and taken responsibility for the potential consequences of your actions (i.e., carrying insurance), you're free to take that risk. Note, you still don't have the right to run over pedestrians, but if you do, you will be held liable, not the manufacturer of your car or the state which granted you a license. You would likely lose your license under such a circumstance, but no one else would lose theirs to mitigate the risk of future pedestrian impacts.

Zoning laws are a complete non sequitur. The issue with building a large dynamite factory in a residential neighborhood is the threat to the people of the neighborhood who not only didn't consent to live near it but specifically chose to live in an area zoned so that such things could not be built. Building a dynamite factory wherever you want is not something you have the innate right to do. That said, you probably can get a permit (assuming you have the proper licenses and insurance) to build a dynamite factory in an appropriately zoned area.

There is no car sold in America which could have prevented this accident.

Yes, that is exactly Tesla's line, and it is entirely beside the point.

The point is the EXPECTATION set by decades of basically flat-out lying about the capabilities of "Autopilot" and "Full Self Driving".

Hell, just the names alone entirely point to no-limitations full self driving. Plus selling $10k upgrades to use your car as a driverless taxi with the next software upgrade, which, years later, has never happened. The fine print is BS; the message screams nothing but capable driving.

Add to that the instant data collection and real-time deletion of it at the scene, and Tesla is 100% wrong. (And I used to admire Tesla and aspire to own one.)

And this is 100% Tesla's own fault. They did NOT have to market it that way. Had they marketed it as "Advanced Driving Assistant" or "Super Lane-Keeping", or something, and left the data available to the driver, they likely could have won this case easily. "The guy wasn't even looking, we never even implied it could take over all driving, what the hell was he thinking?".

Actual "autopilot" such as the kind actual pilots use does not give the pilot the ability to disregard the operation of their airplane.

How does calling a feature "autopilot" then give consumers the impression that they can completely hand over operation of the car? Driving a car is a serious task and this driver was extremely negligent in executing that task.

The "Autopilot" on planes is a lovely technical distinction. Hardly any non-aviator knows the details, and the common idea is that the plane pilots itself in most phases of non-emergency flight. And indeed, some autopilot functions can even auto-land according to the flight plan. But there is a bit of wiggle-room there to say people should expect to pay attention.

Tesla's "Full Self Driving" leaves no such room. The only exception would be to consider Tesla to be lying.

"Full", adjective Containing all that is normal or possible. "a full pail." Complete in every particular. "a full account."

That's just the first DDG search. Go to ANY other dictionary and show me where "Full" in ordinary American English usage means anything other than complete, no exceptions, etc.

"Full Self Driving" literally means "It FULLY drives itself".

It does NOT mean "You and the automobile co-drive the car", or "Mostly self-driving", or "Sometimes self-driving", or "Self driving until it can't figure it out or fcks up", the latter of which seems the most accurate.

If Tesla had called it ANY of those other things, they would be fine. And would be honest.

But instead, Musk and Tesla decided to lie in plain language. And for that, they lost the case, and will likely lose many others.

For some people, honesty matters. And honesty is not splitting hairs over differences between detailed technical meaning vs colloquial understanding ("autopilot"), or using the end-goal a decade+ away as the official name and description of the feature set ("Full Self Driving"). That is dishonest.

Driving a car is not a "colloquial" activity; it is a technical activity. Splitting hairs about the technical operation of a car is exactly what a competent operator should be doing.

Regardless, this car did not have FSD, it had "autopilot".

Driving a car in the USA with a license requires passing about a 5-minute written test about rules of the road and five minutes of a driving test. It is an almost universal event for residents of the country.

Piloting an aircraft with an autopilot requires a MINIMUM of 1500 hours of instruction and experience as well as multiple levels of certification (VFR, IFR, multi-engine, and specific type certification).

You are seriously trying to claim that these are even remotely similar activities?

Yes, drivers SHOULD split hairs over the technical operation of the vehicle.

Should and Is/Are/Do are NOT the same thing. Particularly when the company founder and prime spokesperson brays about how it will do everything for you and constantly overpromises stuff that won't be available for a decade (if ever) as if it were here already.

sheesh

> Piloting an aircraft with an autopilot requires a MINIMUM of 1500 hours of instruction and experience as well as multiple levels of certification (VFR, IFR, multi-engine, and specific type certification).

What? A full private pilot license only requires 35 or 40 hours of flight time (depending on school type); a sport license only requires 20 hours. Airplanes flyable by new pilots of either type very often have autopilots today.

But Tesla doesn't have "Full Self Driving" period, they only have "Full Self-Driving (Supervised)" (and, prior to that, it's been FSD Beta).

You can't just keep ignoring the "Supervised" bit as if it's not there. Just because you think it's a stupid name doesn't make it a lie. Have you even tried it yourself? I've tried v12, and it's amazing. It does fully self-drive. Why would they call it "mostly" if the idea has always been that it'd be "full" when it's done and out of beta? And as robotaxi shows, it's literally almost there.

I've just tried searching "FSD site:tesla.com" in Google Search, and basically every single result is "Full Self-Driving (Supervised)", with very few exceptions.

Hint: what does the word ‘full’ mean?

"Full" means it drives itself as long as you "Supervise".

Did you use v12 or v13 FSD? I used v12 last year (which is far behind v13, and yet further behind robotaxi). I'd enable it as soon as I was out of the garage, and it'd safely drive me right to the destination.

How exactly is that not "Full"? Why would they call it anything else when it can drive itself from point A to point B without any interventions most of the time?

>>Why would they call it anything else when it can drive itself from point A to point B without any interventions most of the time

Why?

Because the FACT is that you must put in the caveat MOST OF THE TIME, or someone is likely to die.

If they were honest they would call it "Supervised Mostly Self Driving". Even "Supervised Self Driving — you mostly supervise, not drive!" would be accurate.

Again, go to any dictionary and find the definition of "Full". Websters:

>>1 containing as much or as many as is possible or normal

>>2a complete especially in detail, number, or duration

>>2b lacking restraint, check, or qualification

>>2c having all distinguishing characteristics : enjoying all authorized rights and privileges

>>2d not lacking in any essential : perfect

The problem is Tesla and you are attempting to literally change the definition of "Full" to mean "Not Full".

This is lying or deceiving yourself, and deceiving others.

Recognize facts and that language actually has meaning, and stop being part of the problem. Or continue watching your hero lose lawsuits.

>>Why would they call it "mostly" if the idea has always been that it'd be "full" when it's done and out of beta?

Because "Mostly..." is the truth, and then when it is actually "Full..." they can come out and announce that fact with great fanfare. and they would have been honest.

Hell, if they simply called it "Supervised Self Driving", it would be honest, and actually match even your glowing description.

But they do not. Your and Tesla's idea of using the added tagline "(Supervised)" as a legal weasel-word does not work either. "Full Self-Driving (Supervised)" is literally an oxymoron. A thing either drives itself fully, or it requires supervision. One contradicts the other.

IIRC, the "(Supervised)" bit was added well after the first fanfare with only "Full Self Driving" alone, when problems started to appear. And the common initials are "FSD".

Even if the reality of the feature set meets your glowing description, the problem is the small percentage of cases where it fails. I'm sure the guy in Florida who was decapitated when his Tesla failed to notice a semi-trailer turning across in front of him was similarly confident, and the same for the guy in California who was impaled on a construction traffic barrier. The problem is that it is NOT FULL, it is only full self driving until it fails.

>>And as robotaxi shows, it's literally almost there.

NO, it shows the exact opposite.

Nearly two months after the much-heralded rollout of (fully) self-driving taxis, Tesla still cannot put a single car on the road for a single meter without a supervising safety driver. Moreover, there have been numerous reported instances of the cars making dangerous errors such as left turns into traffic, etc.

>>basically every single result is "Full Self-Driving (Supervised)", with very few exceptions.

Again, that wording is a literally meaningless oxymoron, containing two mutually contradictory statements ("Full" vs "Supervised"), thus open to whatever interpretation the listener latches onto. Moreover, the emphasis is on the first word — "Full" — which is the lie.

I mean, this is just wrong.

Autopilot on a plane does actually fly the plane. You can go hours without any human input being required. Pilots can eat, go to the bathroom, what have you.

Of course we have two pilots, just in case, but this isn't necessary - some countries are pushing for one pilot because the vast majority of flying is done by the plane.

That doesn't mean that autopilot systems on planes are more sophisticated. It just means that automating a plane is much, much easier than automating a car.

We also have fully autonomous trains.

First of all, I don't condone the unlinking of snapshot_collision_airbag-deployment.tar after the upload, but, OTOH, I can also understand why something like that would be done. (If you were the owner of the car, would you want a subsequent owner to have the records of your crash? This is why I hate all these privacy advocates: they ruin it for everyone.)

If you've ever worked at any company where security and privacy are taken seriously, you'd be fully aware that things like logging incorrect password attempts are a straight-up CVE waiting to happen, even though it's something a legitimate user might well want, to find out who's trying to break into their system. Thank the privacy advocates.

But I fail to see who exactly is misled by the marketing, ESPECIALLY given the non-stop negative media attention Tesla has always had.

I mean, it's literally already called FSD Supervised, and previously it's been FSD Beta. How exactly is that not self-explanatory?

But if you've already reached a conclusion, the name is irrelevant. I mean, did you ever look at the Autopilot article on Wikipedia? It's about the aircraft system. Why don't we see the FAA complaining to Boeing and Airbus that their airplanes have this super-misleading "autopilot", even though the pilots must still be supervising the tech?

Yikes

You entirely miss the distinction between trained professional and ignorant consumer; "industrial use only" equipment and materials vs everyday consumer goods; things that require technical training and even certification to use properly vs goods sellable to any consumer; prescription drugs vs OTC.

The technical goods carry a much higher risk and must be used with specific training and context. Using them outside those contexts is likely to harm or kill people, and creates legal liability.

In contrast, consumer goods must be engineered to be safe in ORDINARY circumstances with UNTRAINED people using them.

Tesla is trying to paper over those differences and use technical terms in a common environment and unleash tools requiring skilled supervision to prevent death into a consumer environment, for profit. You are either failing to make the distinction or consciously going along with it.

decades?

Musk joined Tesla with the Series B round in 2005. Doesn't seem that way, but it's been 20 years.

He first started talking publicly about "Autopilot" or "Full Self Driving" in 2013, so 1.2 would be referred to as plural decades. (I didn't have the exact number on hand, but knew it was 1+, and used the proper form; you prompted me to look up the proper number.)

And would the driver’s actions have been different if they had understood that? Was their lack of understanding coincidence, correlated with their Tesla ownership by no fault of Tesla, or deliberately engineered by Tesla’s marketing approach?

You know, I've talked to a whole bunch of people who actually own Teslas, who actually work in tech, and most of them are completely unaware of any of these autopilot features whatsoever.

Most people are actually very dismissive of autopilot, and are completely misinformed of the benefits / drawbacks / differences of "FSD" versus "Autopilot".

Most are completely unaware of the improvements in v12 or v13, the differences between HW3 and HW4, which one they have, the fact that "autopilot" is free, the circumstances under which autopilot can be used, etc.

I talked to some guy last year (mid 2024) who was actually paying $199/mo for FSD v12, before the price drop to $99/mo, and swearing how great it was, yet he had never tried the parking feature, even though it had been released several months prior. He's a software engineer. That's just one example.

So, if anything, Tesla's marketing is nowhere near as successful as these naysayers would make you believe. Because the vast majority of Tesla's own customers are actually far behind on autopilot or FSD buy-in, and are NOT aware of the progress.

An average of 100 people die every day in the US due to traffic accidents, many of which would have been prevented by Tesla-like software. You're obsessing about the wrong side of the equation.

The great thing is that we live in a society controlled by laws, and corporations can't get away with testing everything they want on public roads. Your freedom or desire for “responsibility” doesn't negate others' rights.

The right to negligently operate a vehicle?

There's an interesting philosophical debate about the nature of product liability laws. Suppose I'm selling some gadget which when used correctly is safe, and I've taken all reasonable steps possible to try to make it hard for people to use it incorrectly.

Nevertheless people sometimes still manage to use it in an incorrect way that injures their hands such that it will take a year of treatment and physical therapy before they can use their hands again.

Some people view the point of product liability laws is to make those who are to blame for the injury pay. Under that view I'd not be on the hook.

Another point of view is that it should be about who can most efficiently handle dealing with these injuries. Either someone is going to have to pay for the treatment and therapy to enable these people to use their hands again, or they are probably going to end up on disability for the rest of their lives, which will be paid for by the government (and so indirectly by most of the rest of us).

Who should that someone be?

One candidate is the user's health insurance company.

One problem with that is that in the US there are plenty of people without health insurance. They would be able to get free treatment right after the injury at any emergency room if they can't afford to pay for that treatment, but that would only get them to the point they aren't in any more danger. It would not include the follow ups needed to actually restore function, so there is still a good chance they will end up on disability. Also that "free" emergency room treatment will actually be paid for by higher costs for the rest of us.

Even if the user does have insurance that pays for their treatment and therapy, ultimately that money is coming from premiums of the people that use that insurance company.

This health insurance approach then ultimately comes down to socializing the cost among some combination of two broad groups: (1) those who have health insurance, and (2) taxpayers in general.

Another candidate is me, the gadget maker. Make it so I am liable for these injuries regardless of who was at fault. I know exactly how many of these gadgets are out there. If all injury claims from people using them go through me I'll have full data on injury rates and severity.

That puts me in a good position to figure out how much to raise the price of my gadgets to establish and maintain a fund to pay out for the injuries.

This still socializes the costs, but now instead of socializing it across those two broad groups (everyone with health insurance and taxpayers) it is getting socialized across all purchasers of my gadgets.

The people who favor this strict liability approach also argue that I'm the best candidate for this because I'm in the best position to try to reduce injuries. If health insurance companies were the ones dealing with it and they noticed injury rates are going up there isn't really anything they can do about that other than raise premiums to cover it.

If I'm the one dealing with it and notice injury rates are going up I can deal with it the same way--raise prices so my injury fund can cope with the rising injuries. But I also have the option to make changes to the gadgets such as adding extra safety features to reduce injuries. I might come out ahead going that route and then everybody wins.
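
To make that last point concrete, here is a back-of-the-envelope sketch with entirely invented numbers, showing how the injury fund turns into a per-unit surcharge and why a safety feature can come out ahead:

```python
# Back-of-the-envelope sketch with made-up numbers: under strict liability,
# the maker folds expected injury costs into the unit price, and a safety
# feature pays off whenever it costs less than the surcharge it removes.
units_sold_per_year = 100_000
injuries_per_year = 25            # observed injury claims per year (invented)
avg_cost_per_injury = 80_000.0    # treatment + therapy, in dollars (invented)

surcharge_per_unit = injuries_per_year * avg_cost_per_injury / units_sold_per_year
print(f"Per-unit surcharge to fund injuries: ${surcharge_per_unit:.2f}")  # $20.00

# A hypothetical $5-per-unit safety feature that halves injuries would cut the
# surcharge to $10, so the maker nets $5 per unit and buyers are safer too.
```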

The fundamental problem here is that the way it's presented caused the driver to trust it in a fashion he should not have. The jury slammed Tesla for overpromising, and for trying to hide the evidence.

> I think there's a good chance this was just plain old incompetence, not malice.

If you or I did this, do you think a judge would care? No. We would be sitting in jail with a significant fine to boot.

The point is that these businesses consider this a "cost of doing business" until someone is actually put in jail.

The article claims that the software should have been geofenced in that area but Tesla failed to do that, and that the software should have triggered collision warnings but it did not. So there were things Tesla wanted to hide.

I don't necessarily disagree, but I personally find these "but you theoretically could have done even more to prevent this"-type arguments to be a little dubious in cases where the harm was caused primarily by operator negligence.

I do like the idea of incentivizing companies to take all reasonable steps to protect people from shooting themselves in the foot, but what counts as "reasonable" is also pretty subjective, and liability for having a different opinion about what's "reasonable" seems to me to be a little capricious.

For example, the system did have a mechanism for reacting to potential collisions. The vehicle operator overrode it by pushing the gas pedal. But the jury thinks Tesla is still to blame because they didn't also program an obnoxious alarm to go off in that situation? I suppose that might have been helpful in this particular situation. But exactly how far should they legally have to go in order to not be liable for someone else's stupidity?

>I don't necessarily disagree, but I personally find these "but you theoretically could have done even more to prevent this"-type arguments to be a little dubious in cases where the harm was caused primarily by operator negligence.

The article says that some government agency demanded that Tesla actually geofence the areas Tesla claims its software is incapable of handling. I am not a Tesla owner and did not read the fine-print manual: does Tesla reserve the right to also not sound the alarm when the car is going at speed straight into another car while the driver does not have their hands on the wheel? Sounds bad: the driver is not steering, the car is driving in an area where it is incapable of driving, and still it is heading into an obstacle and the alarm is not sounding. (Still, from the article it seemed like this was a glitch that they were trying to hide, and that this was not supposed to happen.)

Anyway, Tesla was forced to show the data, and they did try to hide it, so even if fanboys attempt to put the blame 100% on the driver, the jury's verdict and Tesla's actions tell us that the software did not function as advertised.

> The fact that Tesla doesn't have a process for making crash data available to investigators

Perhaps hiding the data like this _is_ their process.

> The crash itself was certainly not Tesla's fault

Mixing up who is responsible for driving the car is very much Tesla's fault, starting with their dishonest marketing.

If the driver is literally pressing the gas pedal while not looking at the road and overriding Autopilot, then it is very much the driver's fault.

They shouldn't give anything to the cops.

If the driver is 100% liable without autopilot, then they should be held 100% liable with autopilot.

The law should be clear and unambiguous in this regard until we remove the steering wheel entirely.

The penalties for being at fault with autopilot on should be even higher, since it may well be just as bad as driving while texting!

No. If they were allowed to just say "We did nothing wrong" to avoid cooperating with an investigation, it would open the door to trivial abuse. It is reasonable to assume they are at least partially to blame. Just like a regular citizen can be arrested when they have a backpack full of money an hour after a bank robbery happened, because it is a reasonable assumption, Tesla should be subjected to an investigation until it becomes clear that the backpack's contents are legit.

> a backpack full of money an hour after a bank robbery happened

yeah, man, IANAL, but just having money in a backpack isn't probable cause without specific facts tying that backpack to that bank robbery, and I don't think your analogy holds.

How are investigators supposed to determine the "100% liable" without access to all available data? In a typical RTC, police will seek to obtain dashcam footage from other vehicles to determine what happened and then determine liability (more likely the insurance companies or courts).

I realize I might have been overzealous in my need to stress that drivers should be held liable for "auto-pilot", when it's possible that there are situations when a vehicle system truly malfunctions / does not disengage even after driver input.

In those situations, the manufacturer should be held 100% liable, and the NTSB / other authorities would need a way to determine that - probably the same way they determine it when other cars' systems fail, like when some cars' accelerator pedals get jammed.