Sure, which is why the words you copied are followed by these in my comment:
>… which are only uncovered when the program runs. You do have surprises with modeling (hence my caveats above)
If you already have a paper proof (which is the case in the original post), and you know it’s correct, then you already know all the steps are provable. But you can still get stuck: the steps may not translate correctly into the theorem prover’s model, some details may not formalize cleanly, or you may be missing an existing result that simply hasn’t been formalized yet. AI doesn’t make that worse, though.
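To make the "missing formalized result" failure mode concrete, here's a minimal Lean 4 sketch (all names are hypothetical, not from any real development): the paper step is true and the argument is sound, but the background lemma it leans on hasn't been formalized, so the formal proof stalls at a `sorry`.

```lean
-- Hypothetical stand-in for a background result the paper cites
-- but that nobody has formalized. `sorry` marks the hole: Lean
-- accepts the file but flags the proof as incomplete.
theorem obscure_result_from_the_70s (n : Nat) : n ≤ 2 * n := by
  sorry

-- The "main" step of the paper proof. It goes through mechanically,
-- but only by appealing to the unformalized result above, so the
-- whole development inherits that gap.
theorem paper_step (n : Nat) : n ≤ 2 * n + 1 := by
  have h := obscure_result_from_the_70s n
  exact Nat.le_succ_of_le h
```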
> If you already have a paper proof (which is the case in the original post), and you know it’s correct
You may think it's correct until you realize that your argument transitively depends on some obscure result from the 70s that nobody had ever questioned but that turned out to be flawed. It's probably salvageable, but now you suddenly need to prove something completely different that you might not even understand very well. There's an interview with Kevin Buzzard somewhere where he talks about exactly that happening to them (couldn't find a reference right now).