When code doesn't compile, it doesn't kill anyone. But if a Waymo suddenly veers off the road, it creates a real threat. Waymos had to be safer than human drivers before people would begin to trust them; coding tools did not have to be better than humans to be adopted first. It's entirely possible for a human to make a catastrophic error, and I imagine that in the future it will be more likely that a human makes such an error than an AI, just as it's already more likely that a human will make errors driving a car.
My understanding is that Waymo has gone on the record saying they have human operators who remotely drive the vehicle in scenarios where the automated system is confused.
Which I assert is semantically equivalent to saying: human drivers (even when operating at the diminished capacity of not being present in the car) are less likely to make errors driving a car than AIs.
This is getting off topic, but they did not say the remote humans drive the cars. The cars always drive themselves; the remote humans provide guidance when the car is not confident in any of the decisions it could make. The humans define a new route or tell the car it's OK to proceed forward.