My hunch is that LLMs are nowhere near intelligent enough to make brilliant conceptual leaps. At least not anytime soon.

Where I think AI models might prove useful is in cases where the problem is well-defined, where formal methods can be used to validate the correctness of (partial) solutions, and where the search space is so large that progress towards a proof has to be guided by "vibes" or intuition. Vibes can be trained through reinforcement learning.
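To make the shape of that loop concrete, here's a toy sketch (Python, every name made up, the "intuition" is a hand-written stand-in for what would really be a trained model): candidate steps are ranked by a heuristic, and a verifier prunes anything it can't certify before the search continues.

```python
import heapq

def is_valid(state):
    # Stand-in for a formal verifier (e.g. a proof checker): rejects bad partial solutions.
    return state <= 100

def is_goal(state):
    return state == 100

def score_step(state, step):
    # Stand-in for learned "vibes": prefer steps that look closer to the goal.
    # In the real setting this heuristic would come from reinforcement learning.
    return -abs(100 - (state + step))

def search(start=0, steps=(1, 7, 13)):
    # Best-first search: always expand the most promising verified partial solution.
    frontier = [(0, start, [])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for step in steps:
            nxt = state + step
            if nxt in seen or not is_valid(nxt):
                continue  # the verifier prunes invalid partial solutions
            seen.add(nxt)
            heapq.heappush(frontier, (-score_step(state, step), nxt, path + [step]))
    return None

print(search())  # a sequence of steps summing to 100
```

The point of the sketch is just the division of labour: the model only has to be good at guessing which branch to try next, while correctness is guaranteed by the checker, not by the model.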

Some computer-assisted proofs are already hundreds of pages or gigabytes long. I think it's a pretty safe bet that really long and convoluted proofs that can only be verified by computers will become more common.

https://en.wikipedia.org/wiki/Computer-assisted_proof

They don't need to be intelligent to make conceptual leaps. The DeepMind systems just run a bunch of random RL-driven experiments until they find something that works.