Apologies if this has been discussed in one of the many subthreads, but is there a reasonable explanation for the finding that the crashed Tesla "deleted its local copy" of the crash data?

> Within ~3 minutes of the crash, the Model S packaged sensor video, CAN‑bus, EDR, and other streams into a single “snapshot_collision_airbag-deployment.tar” file and pushed it to Tesla’s server, then deleted its local copy.

Putting aside the legal implications wrt evidence, etc — what is the ostensible justification for this functionality? To a layperson, it's as bizarre as designing a plane's black box to go ahead and delete data if it somehow makes a successful upload to the FAA cloud server. Why add complexity that reduces redundancy in this case?

Two that come to mind:

1) Embedded systems typically do not allow data to grow without bound. If they were going to keep debugging data, they'd have to limit it to the last N instances or so (see the sketch after this list). In this case N=0. It seems the goal here was to send troubleshooting data, not to keep it around.

2) Persisting the data may expose the driver to additional risks. Beyond the immediate risks, someone could grab the module from a junkyard and extract the data. I can appreciate devices that take steps to prevent sensitive data from falling into the hands of third parties.
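For the curious, a minimal sketch of the keep-the-last-N policy from point 1. The directory, filename pattern, and value of N are all hypothetical, not anything Tesla is known to use:

```python
from pathlib import Path

SNAPSHOT_DIR = Path("/var/snapshots")  # hypothetical location
MAX_SNAPSHOTS = 5                      # the "last N instances"; the car effectively runs N=0

def prune_to_last_n() -> None:
    """Delete all but the newest MAX_SNAPSHOTS tar files, oldest first."""
    snapshots = sorted(SNAPSHOT_DIR.glob("snapshot_*.tar"),
                       key=lambda p: p.stat().st_mtime)
    for old in snapshots[:-MAX_SNAPSHOTS]:  # empty slice when fewer than N exist
        old.unlink()
```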

It would be trivial to set it up to delete old instances only when free space drops below a threshold, e.g.:
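A sketch of that policy, again with a made-up directory and threshold:

```python
import shutil
from pathlib import Path

SNAPSHOT_DIR = Path("/var/snapshots")  # hypothetical location
MIN_FREE_BYTES = 2 * 1024**3           # hypothetical threshold: 2 GiB free

def prune_when_low_on_space() -> None:
    """Delete the oldest snapshots only while free space is under the threshold."""
    snapshots = sorted(SNAPSHOT_DIR.glob("snapshot_*.tar"),
                       key=lambda p: p.stat().st_mtime)
    while snapshots and shutil.disk_usage(SNAPSHOT_DIR).free < MIN_FREE_BYTES:
        snapshots.pop(0).unlink()  # oldest goes first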

As for point 2: if the data can expose the driver to additional risks, then the driver is equally exposed by someone stealing the vehicle and harvesting that data. Again, that can be trivially protected against with encryption at rest, which would also preserve the snapshot in the case where communication is disrupted, so the tar is never uploaded (and therefore never deleted).
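One way encryption at rest could work, sketched with the Python `cryptography` package: each snapshot gets a fresh symmetric key, and that key is wrapped with the manufacturer's public key, so nothing on the vehicle can decrypt the file after the fact. The keypair generation below is a stand-in for a real deployment, where only the public key would ever ship with the car:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in keypair for the sketch; in practice the private key would live
# only on the manufacturer's servers, never on the vehicle.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt_snapshot(tar_bytes: bytes) -> tuple[bytes, bytes]:
    """Encrypt a snapshot with a fresh symmetric key; wrap that key for the manufacturer."""
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(tar_bytes)
    wrapped_key = public_key.encrypt(
        sym_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, ciphertext

# The car stores (wrapped_key, ciphertext) on disk.
wrapped, blob = encrypt_snapshot(b"...tar contents...")
```

With that split, a thief or junkyard harvester who pulls the module gets nothing readable, while an un-uploaded snapshot remains recoverable by whoever holds the private key.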