The core of this little essay seems to be this:
Instead of "I understand the causal mechanism and can predict what happens if I change X," you get something more like "I have a sufficiently rich model that I can simulate what happens if I change X, with probabilistic confidence." The answers are distributions, not deterministic outputs. That's a different kind of knowing.
At first this sounded like "hard problems are complex, machine learning helps us manage complexity, therefore machine learning will solve hard problems," which would betray a shallow understanding. But I think the essay's argument is a little deeper than that trite tech-bro hype meme.
Still, I disagree with the conclusion: I'm not convinced we can build these models in the first place, or that our new LLM/transformer-powered tools can meaningfully help. If simulation were the answer to everything, why would new ML tools make a difference in ways that existing simulation tools have not?
Stuff like AlphaFold is amazing—I'm not saying better medical results won't come from ML—but some substance is missing here, and even the author's level of excitement needs more and better backing.