Automated theorem provers are not new; in fact, they are very old. One of the most automated is ACL2, which uses the well-studied waterfall method (unrelated to waterfall development).
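For anyone unfamiliar: the waterfall pours each goal through a fixed sequence of proof processes, and any new subgoals are poured back over the top, with induction as the last resort. A toy Python sketch of that control loop (the process names are the classic Boyer-Moore ones; the stubs are placeholders, not ACL2's actual code):

```python
# Toy sketch of ACL2's waterfall control loop, not the real implementation.
# The first process that applies to a goal replaces it with zero or more
# subgoals, which go back over the top of the waterfall.

class ProofFailed(Exception):
    pass

def waterfall(goal, processes):
    pending = [goal]
    while pending:
        g = pending.pop()
        for apply_process in processes:
            subgoals = apply_process(g)   # None means "does not apply here"
            if subgoals is not None:
                pending.extend(subgoals)  # [] means g was settled outright
                break
        else:
            raise ProofFailed(g)  # nothing applies; report the failed goal

# The classic process ordering, with stubs standing in for the real work:
PROCESSES = [
    lambda g: None,  # simplification (rewriting, linear arithmetic, ...)
    lambda g: None,  # destructor elimination
    lambda g: None,  # cross-fertilization
    lambda g: None,  # generalization
    lambda g: None,  # elimination of irrelevance
    lambda g: None,  # induction (the last resort)
]
```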

LLMs certainly use something similar, except that they take text as input. LLMs, especially those used for marketing stunts, have far more computing power available than any theorem prover ever had. They probably do random restarts when a proof attempt fails, which amounts to partial brute force.
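If that guess is right, the control flow is just rejection sampling against a checker. A hypothetical sketch, where sample_proof and check are stand-ins for a model call and a proof checker (not any real API):

```python
def prove_with_restarts(statement, sample_proof, check, max_attempts=1000):
    """Hypothetical restart loop: sample candidate proofs from a model and
    return the first one a trusted checker accepts."""
    for attempt in range(max_attempts):
        candidate = sample_proof(statement, seed=attempt)  # fresh sample
        if check(statement, candidate):  # the checker is the ground truth
            return candidate
    return None  # budget exhausted: no verified proof found

# Throwing compute at max_attempts is what makes this partially brute force.
```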

Lawrence Paulson correctly complained about some of the hype that Lean/LLMs are getting.

ACL2 even produces formulaic text output that describes the proof in natural language, despite being written entirely in Common Lisp and not being a mythical clanker.

They do not think; they use old, well-established algorithms, plus perhaps some novel ones that have been added over the years.

> LLMs certainly use something similar

They certainly do not. Read the papers where the IMO results were presented. No tools of any kind were used.

Proof search isn't new, but I don't think that captures the value of LLMs.

They act as a learned proposal mechanism on top of hard search: suggesting relevant lemmas and tactics, turning intent into formal steps, and ranking branches based on trained knowledge.

Maybe a kind of learned "intuition engine", built from a large corpus of mathematical text, whose output still has to pass a formal checker. That is not really something we've had to this extent before.
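Concretely, that combination often takes the shape of best-first search where the model proposes and ranks candidate steps while the prover alone judges validity. A hypothetical sketch; propose, apply_tactic, and score stand in for model and prover calls that are assumed here, not taken from any specific system:

```python
import heapq

def best_first_search(goal, propose, apply_tactic, score, budget=10_000):
    """Hypothetical learned-proposal search: the model ranks candidate
    tactics (intuition); apply_tactic, the formal checker, decides
    whether a step is actually valid (rigor)."""
    frontier = [(0.0, 0, [goal], [])]  # (cost, tiebreak, open goals, steps)
    tiebreak = 1
    while frontier and budget > 0:
        cost, _, goals, steps = heapq.heappop(frontier)
        if not goals:
            return steps                # every subgoal discharged
        g, rest = goals[0], goals[1:]
        for tactic in propose(g):       # model suggests lemmas/tactics
            budget -= 1
            subgoals = apply_tactic(g, tactic)  # None if the step is invalid
            if subgoals is None:
                continue
            heapq.heappush(frontier, (cost - score(g, tactic),  # model ranking
                                      tiebreak, subgoals + rest,
                                      steps + [tactic]))
            tiebreak += 1
    return None  # search budget exhausted
```

The division of labor is the point: the model can be arbitrarily wrong and the search still never accepts an invalid proof, because only the checker extends the frontier.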

> They do not think

That claim seems less useful, unless “think” is defined in a way that predicts some difference in capability. If the objection is that LLMs are not conscious, fine, but that doesn't say much about whether they can help produce correct formal proofs.