Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report
I think this is part of the reason I am wary of trying it (including some of the competitors' variants). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course-correct at the literal last second.
SAE level 2 is just a bad idea. People can't be expected to carefully monitor a car and take over at a moment's notice when it's doing all the driving. My adaptive cruise control is great, and I hope to have a future car where I can zone out while it drives and take over after a few seconds' heads-up, but the zone in between shouldn't be a valid feature.
I think you mean SAE Level 3. SAE Level 2 is “lane centering” and “adaptive cruise control” [1]. (Level 3 is “when the feature requests, you must drive.”)
[1] https://www.ncdd.com/images/blog/diagram.png
Interestingly, I think that similar types of arguments are made against "agentic coding"
If you don't pay constant attention, you will never notice when it slips in a bug or security issue
Sure, but you can do that in a diff after the event, rather than live.
Car crash deaths are better known than deaths caused by software bugs. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends to experimental tech.
Treat it like a driver-assistance system. I treat FSD the same as I treat Adaptive Cruise Control and Lane Keep Assist in my CR-V: I keep my hands on the steering wheel and follow along with the decision making.
Reminds me of a situation not long ago.
I’m in left lane on highway. Tesla ahead of me but quite a ways away.
I realize as I'm driving that the Tesla is moving quite slowly for left-lane driving. And before you say it: yes, there are lots of people speeding in highway left lanes too.
So - I passed on the right rather than tailgate. Look over and see a guy leaning back in his seat. No hands on wheel. Could’ve been asleep. And driving 10-15 mph slower than you’d expect in that lane.
To your point about using FSD the way you do: makes total sense to me. Which implies you would also cruise at the right speed for the lane you are in, unlike in my example.
One of my major complaints about FSD is the 'speed profiles'. You used to be able to set a target speed directly. Now you can only select a profile: you're either going the exact speed limit, 2-3 mph over, or essentially 'with the flow of traffic', which can mean speeding 15+ mph over the limit.
Didn't know about that feature. Thanks for the illumination. On verge of going full electric and looking at BMW, Lucid, Porsche, Rivian, Tesla.
I wonder what's taught to new drivers about this sort of situation. My intuitive feeling (driving for almost 30 years) is that you drive with the flow of traffic when traffic is present. I don't see too many left-lane drivers glued to speed limits, but it's obvious when someone is fast or slow.
It's worth noting that older Teslas, pre-2024, are stuck on an old version of FSD due to compute limitations. Recent FSD generally does not hang out in the left lane and is very good at recognizing when vehicles approach from the rear. It will move to the right lane to allow them to pass.
Excellent -- noted.
Real question, then, from someone who only bothers driving when he must and even then in a 2016 model: Why do you use it? What beneficial purpose do you find it to serve?
I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
Here's a simple example from last week. FSD was in control on my way to work, stopped at a red light early in the morning before the sun was up. The light turns green and FSD does not accelerate. I figured it was somehow confused and I was starting to move toward hitting the accelerator myself when a car comes flying through the red light from the driver's side. I hadn't noticed this car, but FSD saw it and recognized it wasn't slowing down. I could see there were headlights, but it wasn't clear how fast it was going.
It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities, where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue, and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
Glad you're ok!
I was watching the Tesla display on my way back home from LaGuardia airport last week (passenger, not driver).
No accidents or close calls, but it was obvious that I might be focused on 1 or 2 things in that very busy and chaotic environment whereas the car (FSD or otherwise) sees more than 2 things and possibly avoids something on my behalf.
Huh.
I appreciate your thoughtful and detailed response. I'll need to think about it for a while, too. It had not occurred to me to consider the possibility that someone else's FSD might protect me from the general incompetence and unreliability of amateur motor vehicle operators.
(Jumping a light in the dark? Not thinking or learning to navigate by verbal instructions from your satnav or phone, instead of compromising the primary sense you must constantly use to drive without risking manslaughter? I'm sorry, but if this is the standard, I really can't describe it other than it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.)
> Jumping a light in the dark?
I don't know what this means.
> Not thinking or learning to navigate by verbal instructions from your satnav or phone, instead of compromising the primary sense you must constantly use to drive without risking manslaughter?
Navigating involves reading street signs, block numbers, and traffic markings. These are all visual elements that can distract from safety monitoring. How many minor accidents result from drivers trying to figure out where they are, or where they need to go?
> I'm sorry, but if this is the standard, I really can't describe it other than it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.
My claim isn't that safety is less important in city driving, it's that driving is far safer due to lower speeds. There's more time to react and lower risk of catastrophic results when driving at 35mph. The challenge for a driver isn't sudden loss of control as you may experience at 65+mph. The city driving challenge is trying to track markings, signage, pedestrians, and parked cars while also navigating and managing the vehicle's basic operation. FSD can track all of that without distraction and leave the driver responsible for more human reasoning tasks.
> I don't know what this means.
You failed to account for the possibility of another driver's error. If you weren't taught to do that as I was, the mistake is not entirely your own, but it was still a mistake that would have led you into an accident had your vehicle not rescued you.
> There's more time to react and lower risk of catastrophic results when driving at 35mph.
Not for me. You're the one wearing power armor, remember.
Which is just worse.
When I'm driving, I know what I'm doing and what I'm planning to do, and I can scan the road and controls with that context.
Making me guess what the car is going to do at any given moment adds complexity to the process: am I changing lanes now? Oh, I guess I am, because the autonomy thinks we should, etc.
Not sure about your car, but the car I have with adaptive cruise requires hands on the wheel. Turns off otherwise. (Volvo XC90)
I agree that there are situations where what I do as a trained driver is different from what adaptive cruise does.
A good example (or perhaps I'm wrong) is this: I'm in a lane, and a car pulls into the lane in front of me, between me and the car further ahead. Now I don't have enough space between me and the new entrant. But instead of using the brakes (unless it's egregious), I bleed speed until I've made the space I want. Adaptive cruise doesn't do that - it hits the brakes.
So, from behind, it looks like I'm using my brakes a lot more on adaptive cruise than I do when driving myself. And excessive brake use distracts the driver behind me.
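For concreteness, the difference between the two strategies is roughly this (a toy sketch with made-up time-gap thresholds, not Volvo's or anyone else's actual ACC logic):

    # Toy comparison of the two gap-management strategies described above.
    DESIRED_TIME_GAP_S = 2.0  # the following gap the driver wants to restore
    HARD_BRAKE_GAP_S = 0.8    # below this, braking is unavoidable ("egregious")

    def human_style_response(time_gap_s: float) -> str:
        """Bleed speed by coasting unless the cut-in is egregious."""
        if time_gap_s < HARD_BRAKE_GAP_S:
            return "brake"
        if time_gap_s < DESIRED_TIME_GAP_S:
            return "coast"  # lift off the throttle; the gap reopens gradually
        return "hold_speed"

    def typical_acc_response(time_gap_s: float) -> str:
        """Brake as soon as the measured gap drops below the set point."""
        return "brake" if time_gap_s < DESIRED_TIME_GAP_S else "hold_speed"

    # A 1.5s gap after a mild cut-in: the human coasts, the ACC brakes,
    # and those brake lights are what the driver behind sees.
    print(human_style_response(1.5))   # coast
    print(typical_acc_response(1.5))   # brake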
Sure, but the practical experience is that FSD is fairly predictable. It's just a matter of personal preference that comes from experience. I wouldn't impose a system like FSD on everybody.
I'm a >90% FSD user, and I approve this sentiment. My wife hates it for the mistakes it makes (e.g. there seems to be a recent shadow-recognition regression) and its "errors in judgement" (not getting into the turn lane in a timely manner); she would never use it on her own.
I've got plenty of experience, and (feel as though) I know most of its failure points. I had to drive my 30-minute commute myself last week, and it was decidedly unfun. I have seen the future and I don't want to go back.
96% here, including DC and Baltimore. Besides the bizarre navigation choices and waiting too long for lane changes, FSD has reached essentially zero interventions outside of bad-mapping situations. I really wish Tesla would use better map data, for sure.
A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.
I don't really buy that. There are a lot of situations (e.g. being directed to park in a space at a fairgrounds, ski area, or whatever) that, AFAIK, you can't reasonably expect to be programmed into a car's computer. Even if a car can legitimately handle roads under most circumstances, it's not going to be able to handle everything.
I think their point was "it's not ready yet."
"Because the Origin does not have manual controls, the NHTSA must issue an exception to the Federal Motor Vehicle Safety Standards to permit operation on public roads"
Too bad that project failed.
https://en.wikipedia.org/wiki/Cruise_(autonomous_vehicle)
Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.
Would it be a vote of no confidence in Full Self Flying?
No, it would be an acknowledgement of the lack of perfection in human systems so far.
I mean, they kinda are.
Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.
That presents an interesting failure mode challenge.
Well, we don't have any self-driving cars outside of San Francisco. Only cars with advanced driver assistance.
Quite a few more places have them now:
https://support.google.com/waymo/answer/9059119?hl=en
Also in Vegas (Zoox), and China has their own competitive market of self-driving taxis.
How do you reverse such a car into your own driveway that's positioned in a funny way at an angle and an incline? What if you're parking off road for any reason? Like, you have to be able to manoeuvre your own vehicle sometimes.
To be fair, that report says
> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”
It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, probably 5s prior is a reasonable limit.
It's an insane reversal of roles. In a standard level 2 ADAS, the system detects a pending collision the driver has not responded to and pumps the brakes. Tesla FSD does the reverse: it detects a pending collision that it has not responded to, and shuts itself off instead of pumping the brakes (contrast sketched below). It's pure insanity.
Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
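To caricature that role reversal in pseudologic (function names are mine, and this is obviously not anyone's real safety code):

    # Caricature of the role reversal described above. Names are
    # hypothetical; pseudologic, not anyone's actual safety code.
    def apply_emergency_brakes() -> str:
        return "braking"

    def hand_control_back_to_driver() -> str:
        return "disengaged"

    def standard_l2_adas(collision_imminent: bool, driver_responding: bool) -> str:
        # Driver drives; the system backstops the driver's lapse.
        if collision_imminent and not driver_responding:
            return apply_emergency_brakes()
        return "nominal"

    def claimed_fsd_behavior(collision_imminent: bool, system_coping: bool) -> str:
        # System drives; on imminent impact it hands control back
        # instead of backstopping itself.
        if collision_imminent and not system_coping:
            return hand_control_back_to_driver()
        return "nominal"

    print(standard_l2_adas(True, False))      # braking
    print(claimed_fsd_behavior(True, False))  # disengaged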
IDK, this has the same unethical energy as police turning off body cameras.
In the BEST CASE, this is a confluence of coincidences: engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.
In the worst case, this is intentional.
In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.
This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally, or through an intentional omission.
> the "right thing to do" is NOT turn off the cameras just before a collision
Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)
This is a policy that Tesla put in place, period. Handing control back to the driver suddenly, at a weird moment, can make the whole situation even more dangerous, as the driver is not primed to handle it on the spot; it's all too unexpected.
Yep, your comment reminds me of a time my mother was about to hit a bird in the road. She was too busy arguing with the passenger to notice, and her driving was already becoming erratic. I decided not to tell her, because I knew the shock could cause her to do something more drastic, like crashing the car trying to avoid it.
I guess I'll step in for the counter.
How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
> How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate?
By anticipating further ahead. If it finds itself in a situation that it can't get out of, that means it should have made more defensive choices earlier or relinquished control earlier. And if it doesn't have either the reasoning capacity or the spatial-awareness data to do that, it is not fit for general usage and should be pulled.
Was this case FSD, or was this the earliest generation of the technology? And does this still happen?
I agree - you're right that that's what you'd expect to happen.
Maybe the car should not have this dangerous feature in the first place? Or maybe train drivers thoroughly and frequently, so that when this situation arises it is less dangerous.
It seems to me FSD for Tesla is not ready to go into Prod as it is now.
The few Tesla post-mortems I read early on stated that FSD turned off before impact and used this as a defence of their system. If they had shared that this happened 1 second before impact (far too late for a human to respond), I'd have sympathy. I have never read a Tesla statement that contained this information.
For normal incidents, 2 seconds is taken as a response time to be added for corrective action to take effect (avoidance, braking). I’d expand this for FSD because it implies a lower level of engagement, so you need more time to reengage with the car.
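Back-of-the-envelope, that response window translates into a lot of road (my figures, not the report's; the 3s row is my guess for re-engaging after FSD, not an established number):

    # Distance covered during a response delay, before any corrective
    # action takes effect.
    MPH_TO_MPS = 0.44704

    def response_distance_m(speed_mph: float, response_time_s: float) -> float:
        return speed_mph * MPH_TO_MPS * response_time_s

    for t_s in (2.0, 3.0):
        print(f"{t_s:.0f}s at 65 mph: {response_distance_m(65, t_s):.0f} m")
    # 2s at 65 mph: 58 m
    # 3s at 65 mph: 87 m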
This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second, causing the software to deactivate. That said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event, rather than requiring self-driving to be active at the exact moment of impact.
Agreed. I also feel like there is a world of difference between the driver deliberately assuming control at the last second because they notice that an accident is about to happen, and the car itself yielding control unprompted because it thinks an accident is about to happen.
The former is to be expected. The latter seems likely to make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
> It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.
So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.
That's still not a good look.
And it does mean that FSD shouldn't be trusted as much as it is, because if the car is putting itself in unresolvable situations, that's still a problem with FSD even if it isn't in direct control at the moment of impact.
It's been well known for a while now, and it's not to avoid recording being active; it's to avoid having a possibly damaged computer keep driving in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?
AEB should still work to apply the brakes, AFAIK, but auto-steer and cruise control will be disabled, even while the computer and electronics are still perfectly operational, to make the car safer for the passengers and first responders after the event.
EDIT: IIRC the threshold for disengagement is 1s.
>> Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report
> It's been well known for a while now, and it's not to avoid recording being active; it's to avoid having a possibly damaged computer keep driving in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?
That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if their implementation 1) also stopped recording seconds before a crash or 2) they publicly claimed it wasn't responsible since it turned itself off, then Tesla is behaving unethically and dishonestly.
I'm just stating what I remember, I'm not trying to defend Tesla.
For 1) it's the first time I've heard it from a technical point of view - Tesla's dashcam records the last 10 minutes continuously, and should save the data on the internal computer in case of a crash and send it back to Tesla if feasible, AFAIR (I'm an owner). IIRC it's not the first case, though, where Tesla claimed the data wasn't available or was corrupted, and then it was actually recovered some time later after pressure from authorities. So I think technically the data is there, but I also believe Tesla is behaving unethically and dishonestly to cover up or delay retrieval.
2) I often hear it as FUD, as in: AP/FSD was off, the user just did it by accident, wasn't accustomed to it, or just didn't know how it worked. AFAIR most of the accidents had the data released and showed some of the following: user touched steering wheel and disengaged autosteer/FSD (whether knowingly or by accident), user was pressing accelerator pedal by accident, user was pressing accelerator instead of brake, etc etc
Disregarding the fact that NHTSA findings apparently contradict it (though that may just be a more recent change than the 2022 report), Tesla claims to use five seconds before a collision event as the threshold for their data reporting on their FSD marketing page:
> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]
In theory, that should more than cover the common perception-response times of ~1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process as driver-assistance systems return control to the driver, and on its impact on driver response times and overall alertness.
If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we're now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus they're now barreling down on.
For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be an instinctual reaction? I've been in the passenger seat when a driver slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that during normal braking back when I was learning to drive.
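For concreteness, the quoted five-second rule amounts to a window check like this (my sketch; the names and data layout are hypothetical, not Tesla's code):

    # Sketch of the quoted five-second attribution rule. Structure and
    # names are mine, not Tesla's implementation.
    ATTRIBUTION_WINDOW_S = 5.0

    def counted_as_fsd_collision(collision_t: float,
                                 fsd_active_intervals: list[tuple[float, float]]) -> bool:
        """True if FSD was active at any point within the five seconds
        leading up to the collision, per the quoted policy."""
        window_start = collision_t - ATTRIBUTION_WINDOW_S
        return any(start < collision_t and end > window_start
                   for start, end in fsd_active_intervals)

    # FSD aborts on its own 1s before an impact at t=100: still counted.
    print(counted_as_fsd_collision(100.0, [(0.0, 99.0)]))  # True
    # FSD was last active 8s before the impact: not counted.
    print(counted_as_fsd_collision(100.0, [(0.0, 92.0)]))  # False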
0. https://www.tesla.com/fsd/safety#:~:text=within%20five%20seconds