Most LLMs I've tried produce invalid reasoning. Many confuse empirical evidence (simulating the program for a few steps and observing that it "most probably doesn't halt") with a definite proof that it never halts; some construct invalid probabilistic arguments to the same effect.
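
To make the fallacy concrete, here's a minimal Python sketch (my own illustration, not anything an LLM produced; the Collatz map stands in for a halting-style question, and all names are mine): a bounded simulation that fails to reach a halt state is merely inconclusive, never a proof of non-termination.

```python
# Sketch: why bounded simulation is not a halting proof.
# The Collatz map is used only as a stand-in for an open
# halting-style question; names here are illustrative.

def collatz_step(n: int) -> int:
    """One step of the Collatz map."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def simulate(n: int, max_steps: int) -> str:
    """Run at most max_steps iterations starting from n.

    Returns "halts" only if we actually reach 1; otherwise the
    result is "inconclusive" -- NOT evidence of non-termination.
    """
    for _ in range(max_steps):
        if n == 1:
            return "halts"
        n = collatz_step(n)
    return "inconclusive"  # budget exhausted; proves nothing

if __name__ == "__main__":
    # 27 famously takes 111 steps to reach 1: with a budget of
    # 100 it looks "probably non-halting", yet it terminates.
    print(simulate(27, max_steps=100))  # inconclusive
    print(simulate(27, max_steps=200))  # halts
```

The point is the return value: treating "inconclusive" as "never halts" is exactly the leap the models make.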

Others I've tried get caught in a loop, retrying the same insufficient proof approach over and over, with little exploratory or "creative" behavior

Generally, LLMs seem to lack the "motivation" to genuinely attempt unsolved problems, especially when they know the problems are unsolved or difficult