Yeah, but ultimately it's all just function approximation, which produces some kind of conditional average of the training targets (with squared loss, literally the conditional mean). There's no getting away from that, which is why it surprises me that we expect them to be good at science.
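A quick sketch of what I mean by "conditional average": if the data for one input is bimodal, the constant that minimizes squared error is the mean of the modes, a value the data never actually takes. (Toy example, numpy only; the grid search just stands in for whatever optimizer you'd use.)

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed input, bimodal targets: y is +1 or -1 with equal probability.
y = rng.choice([-1.0, 1.0], size=10_000)

# Search for the constant prediction c that minimizes mean squared error.
candidates = np.linspace(-1.5, 1.5, 301)
mse = [np.mean((y - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(mse))]

print(best)  # near 0.0: the conditional mean, which neither mode equals
```

Swap in absolute error and you'd get a conditional median instead; either way you get a summary statistic of the target distribution, not a mechanism.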
They'll probably get really good at model approximation, since there's a clear reward signal, but where that feedback loop is impossible or very difficult, we shouldn't expect them to do well.