I agree, but I think there's an important distinction to be made.

In some cases, it just doesn't have the necessary information because the problem is too niche.

In other cases, it does have all the necessary information but fails to connect the dots, i.e. reasoning fails.

It is the latter issue that affects all LLMs to such a degree that I'm becoming very sceptical of the current generation for tasks that require reasoning.

They are still incredibly useful, of course, but those reasoning claims are just false. There are no reasoning models.