I'm pretty sure the central premise is flawed: human computation over infinite problem spaces is subject to the halting problem too.

I skimmed, saw this, and decided at that point it was crank work. The problem is not well defined enough, and you could easily apply the same argument to humans. It's just mathematical notation being abused to dress up subjective arguments:

A.3.1. Example: The Weight Question as an Irreducibly Infinite Space

Let us demonstrate that the well-known example of the “weight question” (see Section 2.1) meets the formal criteria of an irreducibly infinite decision space as defined above.

We define the decision space X as the set of all contextually valid responses (verbal and nonverbal) to the utterance: “Darling, please be honest: have I gained weight?”

Let Σ be the symbol space available to the AI system (e.g., predefined vocabulary, intonation classes, gesture tags). Let R be the transformation rules the system uses to generate candidate outputs.

Then:

1. Non-Enumerability: There exists no total computable function such that every socially acceptable response is eventually enumerated. Reason: The meaning and acceptability of any response depend on unbounded, semantically unstable factors (facial expressions, past relationship dynamics, momentary tone, cultural norms), which cannot be finitely encoded.

-----

Just want to add that I don't mean to be an asshole here, in case this stays the top reply. I'm quite interested in quantifiable measures of intelligence myself, and it takes guts to put something like this out there with your name on it.

What might help the author is to think of his attempts to disprove AGI as a more adversarial minimax. For whatever theory or example you have of something that is not possible under AGI, ask why a better-designed intelligence could not achieve it, and why the same limit does not also apply to humans.

For example, instead of assuming that an AI will search infinitely without giving up, consider whether it might cap the time it spends on a problem, switch to thinking about something besides aether if that approach is taking too long, abandon the problem because it isn't important enough to keep going, or whether humans suffer from the same epistemic uncertainty.
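To make the time-limit point concrete, here's a minimal sketch in Python (the names `satisficing_search`, `score`, and the thresholds are my own illustration, not anything from the paper): an agent facing a potentially unbounded stream of candidate responses doesn't need to enumerate all of them, it can keep the best answer seen so far and stop when either a good-enough answer appears or its time budget runs out.

```python
import itertools
import time

def satisficing_search(candidates, score, threshold=0.9, budget_s=0.5):
    """Anytime search over a (possibly unbounded) candidate stream.

    Returns the best-scoring candidate found before the time budget
    expires, or earlier if one clears the 'good enough' threshold.
    No exhaustive enumeration of the space is ever required.
    """
    deadline = time.monotonic() + budget_s
    best, best_score = None, float("-inf")
    for c in candidates:
        s = score(c)
        if s > best_score:
            best, best_score = c, s
        if best_score >= threshold or time.monotonic() >= deadline:
            break  # satisfice or give up -- no infinite search
    return best

# Toy usage: an "infinite" stream of integer responses, scored by
# closeness to 42; the search halts as soon as it hits a perfect score.
answer = satisficing_search(itertools.count(),
                            lambda n: 1 - abs(n - 42) / 42,
                            threshold=1.0)
# answer == 42, reached well within the budget
```

The same skeleton covers the "give up because it isn't important enough" case: just lower `budget_s` or `threshold` in proportion to how much the problem matters.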