If I understood correctly, this is about finding solutions to problems with an infinite solution space, where new information does not constrain the search.
Humans don't have the processing power to traverse such vast spaces. We use heuristics, in the same way a chess player does not iterate over all possible moves.
It's a valid point to make; however, I'd say it just means any AGI-like system has the same epistemological issues as humans, and there's no way around that because of the nature of information.
Stephen Wolfram's computational irreducibility is another issue any self-guided, physically grounded computing engine must face. There are problems that have to be computed in full; thinking long and hard about possible end-states won't help (a sketch of what I mean is below). So one would rather have 10,000 AGIs doing somewhat similar random searches in the hope that one of them finds something useful.
I guess this is what we do in global-scale scientific research.
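The textbook illustration (not from the original article, just my own sketch) is Wolfram's Rule 30 cellular automaton: as far as anyone knows, there is no shortcut to the state after N steps other than actually running all N steps. Something like:

    // Rule 30: new cell = left XOR (center OR right), run on a ring of cells.
    const WIDTH = 64;
    const STEPS = 20;

    let cells: number[] = new Array(WIDTH).fill(0);
    cells[Math.floor(WIDTH / 2)] = 1; // single live cell in the middle

    function step(row: number[]): number[] {
      return row.map((_, i) => {
        const left = row[(i - 1 + row.length) % row.length];
        const right = row[(i + 1) % row.length];
        return left ^ (row[i] | right);
      });
    }

    for (let t = 0; t < STEPS; t++) {
      console.log(cells.map(c => (c ? "#" : ".")).join(""));
      cells = step(cells); // no known way to jump ahead: each row needs the previous one
    }

There is no closed-form "end-state" to reason your way to; you pay for every step.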
I find Wolfram's computational irreducibility a very important aspect when dealing with modern LLMs, because for them it can be reduced (here it can) to "some questions shouldn't be inferred, but computed". In recent tests, I played with a question where models had to find cities and countries that can be joined on a common vowel in the middle (like Oslo + Norway = Oslorway). Every "non-thinking" LLM answered mostly wrong, but wrote a perfect, ready-to-use copy/paste HTML/JS script that, when run, found all the correct matches worldwide. Recent "thinking" models managed to make do within the prompt, but it was a long process that ended with only one or two results. We just can't avoid computation for plenty of tasks.
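For what it's worth, the scripted version of that puzzle really is a few lines. A minimal sketch, under my own assumptions (the city has to end with a vowel, the country name has to contain that vowel, and the two words overlap at it; the pair list is a toy sample, not a world dataset):

    const VOWELS = new Set(["a", "e", "i", "o", "u"]);

    // Toy sample only; a real run would load full city and country lists.
    const pairs: Array<[string, string]> = [
      ["Oslo", "Norway"],
      ["Accra", "Ghana"],
      ["Paris", "France"],
      ["Tokyo", "Japan"],
    ];

    // Assumed splice rule: the city ends with a vowel that also appears in
    // the country name; overlap the two words at that vowel.
    function splice(city: string, country: string): string | null {
      const last = city[city.length - 1].toLowerCase();
      if (!VOWELS.has(last)) return null;          // city must end in a vowel
      const idx = country.toLowerCase().indexOf(last);
      if (idx === -1) return null;                 // country must contain that vowel
      return city + country.slice(idx + 1);        // e.g. Oslo + Norway -> Oslorway
    }

    for (const [city, country] of pairs) {
      const merged = splice(city, country);
      if (merged) console.log(`${city} + ${country} = ${merged}`);
    }

Run against a full city/country list, something like this finishes instantly, which is the whole point: computing the answer beats inferring it.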