You're missing the point. Consider this:

Mathematicians use LLMs. Obviously, they don't trust an LLM to do math directly. But an LLM can help with formalizing a theorem and then finding a formal proof. That's usually very tedious work, and LLMs are _already_ quite good at it. In the end you get a proof that is checked by ordinary proof-checking software (not an LLM!), which you can also inspect, break into parts, etc.
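To make that concrete, here's a toy sketch in Lean (one proof assistant used this way; the specific theorem here is just an illustration). An LLM might draft the statement and the proof term, but it's Lean's kernel that accepts or rejects it:

```lean
-- Formalized statement: addition on naturals is commutative.
-- However this proof was produced (by hand or by an LLM),
-- Lean's kernel verifies it independently before accepting it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof term were wrong, Lean would refuse to compile it, so a hallucinated "proof" simply doesn't get through.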

You really need to look into the details rather than dismiss this wholesale ("it made a math error, so it's bad at math" is the wrong conclusion).