> You don’t have “last minute surprises”

Not sure about that. One of the steps in your plan may be surprisingly hard to prove. You're going to solve the easier steps first and then may get bogged down at 98%.

Sure, which is why the words you copied are followed by these in my comment:

>… which are only uncovered when the program runs. You do have surprises with modeling (hence my caveats above)

If you already have a paper proof (which is the case in the original post), and you know it's correct, then you already know all the steps are provable. But if they aren't translated correctly into the theorem prover's model, if some details don't translate well, or if you're missing an existing result that hasn't been formalized, you can get stuck. AI doesn't make that worse, though.
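To make the "getting stuck" point concrete, here's a minimal Lean sketch (everything in it is made up for illustration; `Widget`, `IsNice`, `IsBounded`, and the cited 1974 theorem are hypothetical, not from mathlib): the paper treats one step as immediate from an old result, but if that result was never formalized, the translated step just sits there as a `sorry` you now have to discharge yourself.

```lean
-- Hypothetical setup standing in for whatever the paper is about.
axiom Widget : Type
axiom IsNice : Widget → Prop
axiom IsBounded : Widget → Prop

-- The statement as translated from the paper.
theorem main_result (w : Widget) (hw : IsNice w) : IsBounded w := by
  -- On paper: "immediate from Obscure-Author (1974), Theorem 3."
  -- In the prover: that theorem doesn't exist yet, so the proof stalls
  -- here until you find an equivalent lemma or formalize it yourself.
  sorry
```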

> If you already have a paper proof (which is the case in the original post), and you know it’s correct

You may think it's correct until you realize that your argument transitively depends on some obscure results from the 70s that nobody had ever questioned but that turned out to be flawed. Probably salvageable, but now you suddenly need to prove something completely different that you might not even understand very well. There's an interview with Kevin Buzzard somewhere in which he talks about how that happened to them (couldn't find a reference right now).