> > While an error rate of 1-in-1,000 seems low, [...], on a task that requires successful execution of thousands of steps in a row, such a system results in inevitable failure.

> This is also why (edit: non-LIDAR) FSD cars are an illusion.

In this scenario, Waymo’s AI is executing thousands of steps in a row. The fact that it uses LIDAR for sensing doesn’t change that. It’s still AI driving you around no matter what its eyes are made of.

Waymo is a counterexample to the point you were making and their use of LIDAR doesn’t change that.
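For concreteness, the compounding math the quoted claim rests on can be sketched like this, assuming (simplistically) that steps fail independently with a fixed per-step error rate `p`:

```python
def success_probability(p: float, n: int) -> float:
    """Probability of completing n consecutive steps with no error,
    assuming each step fails independently with probability p."""
    return (1 - p) ** n

# A 1-in-1,000 error rate compounds quickly over thousands of steps:
print(success_probability(0.001, 1_000))   # roughly 0.37 (about e^-1)
print(success_probability(0.001, 10_000))  # well under 0.01 percent
```

Whether that model applies to Waymo is exactly what's in dispute here: redundant non-AI systems and retries break the independence assumption.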

No, because safety is guaranteed by the LIDAR, and navigation is done by GPS plus classical algorithms. Mistakes made by the AI can be overcome by those two non-AI approaches, plus re-running the AI-based steps.

LIDAR cannot possibly guarantee safety. It is a sensor.

That's like saying an algorithm cannot guarantee safety because it is not an actuator.

An algorithm controls the actuator. The sensor controls neither the algorithm nor the actuator.

Look, it is a hell of a lot safer to use an input that does not hallucinate objects than an input that does.