I think I'm misunderstanding: they're converting video into a representation that was bootstrapped with LIDAR, video, and other sensors. I suspect you're alluding to Tesla, but Tesla could never have this outcome, since they never had a LIDAR phase.
(edit: I'm referring to deployed Tesla vehicles. I don't know what their research fleet comprises, but other commenters explain that it does collect LIDAR data.)
They can and they do.
https://youtu.be/LFh9GAzHg1c?t=872
They've also built it into a full neural simulator.
https://youtu.be/LFh9GAzHg1c?t=1063
I think what we're seeing is that they both converged on the correct approach, one of them decided to talk about it, and that triggered disclosures all around, since nobody wants to be seen as lagging.
I watched that video around both timestamps and didn't see or hear any mention of LIDAR, only of video.
Exactly: they convert video into a world model representation suitable for 3D exploration and simulation without using LIDAR (except perhaps for scale calibration).
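For what it's worth, "scale calibration" here just means fixing the unknown global scale of a monocular reconstruction. A minimal sketch of one way to do it, aligning predicted depths to a handful of LIDAR returns by least squares (my own illustration in Python, not anything from the video; all names are made up):

    import numpy as np

    # A monocular reconstruction is only defined up to an unknown global
    # scale. Given sparse LIDAR depths at the same points, the scale s
    # minimizing sum((s * d_pred - d_lidar)^2) has the closed form below.
    def calibrate_scale(d_pred: np.ndarray, d_lidar: np.ndarray) -> float:
        """Least-squares scale factor aligning predicted depths to LIDAR depths."""
        return float(np.dot(d_pred, d_lidar) / np.dot(d_pred, d_pred))

    # Toy check: predictions at half the true metric depth recover s ~= 2.
    d_lidar = np.array([4.0, 10.0, 22.0])
    d_pred = d_lidar / 2.0
    print(calibrate_scale(d_pred, d_lidar))  # 2.0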
My mistake - I misinterpreted your comment, but after re-reading more carefully, it's clear that the video confirms exactly what you said.
Tesla is not impressive; I would never put my child in one.
Tesla does collect LIDAR data (people have seen them doing it; it's just not on all of the cars), and they do generate depth maps from sensor data, but from the examples I've seen, theirs are much lower resolution than these Waymo examples.
Tesla does it to build high-definition maps of the areas where their cars operate.
Tesla uses LIDAR to train their models to generate depth data from camera input. I don't think they have any high-definition maps.
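For context, the usual recipe for that is: project the LIDAR returns into the camera frame to get a sparse ground-truth depth map, then supervise the network's dense prediction only at pixels where a return exists. A rough sketch of what that training step looks like in general (PyTorch; the model, shapes, and loss choice are my assumptions, not Tesla's actual pipeline):

    import torch
    import torch.nn.functional as F

    def depth_loss(pred_depth, lidar_depth):
        """Masked L1 loss: lidar_depth is 0 wherever no LIDAR return exists."""
        mask = lidar_depth > 0
        return F.l1_loss(pred_depth[mask], lidar_depth[mask])

    # Stand-in "network" and one toy training step.
    model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    image = torch.rand(1, 3, 64, 64)         # camera frame
    lidar_depth = torch.zeros(1, 1, 64, 64)  # sparse: most pixels have no return
    lidar_depth[:, :, ::8, ::8] = torch.rand(1, 1, 8, 8) * 50.0  # metric depths (m)

    opt.zero_grad()
    loss = depth_loss(model(image), lidar_depth)
    loss.backward()
    opt.step()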