You’re misreading what I’m doing, and I suspect you’re also misdefining what a “proof” in this space needs to be.

I’m not assuming humans exceed the Turing computable. I’m not using human behavior as a proof of AGI’s impossibility. I’m doing something much more modest - and much more rigorous.

Here’s the actual chain:

1. There’s a formal boundary for algorithmic systems. It’s called symbolic containment. A system defined by a finite symbol set Σ and rule set R cannot generate a successor frame (Σ′, R′) where Σ′ introduces novel symbols not contained in Σ. This is not philosophy — this is structural containment, and it is provable.

2. Then I observe: in human intellectual history, we find recurring examples of frame expansion. Not optimization, not interpolation — expansion. New primitives. New rules. Special relativity didn’t emerge from Newton through deduction. It required symbols and structures that couldn’t be formed inside the original frame.

3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

4. This leads to a conclusion: if AGI is an algorithmic system (finite symbols, finite rules, formal inference), then it will not be capable of frame jumps. And it is not incapable of that because it lacks compute. The system is structurally bounded by what it is.

So your complaint that I “haven’t proven humans exceed Turing” is misplaced. I didn’t claim to. You’re asking me to prove something that I simply don’t need to assert.

I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed). Therefore, if humans are purely algorithmic, something’s missing in our understanding of how those systems operate. And if AGI remains within the current algorithmic paradigm, it will not do X. That’s what I’ve shown.

You can still believe humans are Turing machines; that’s fine by me. But if this belief is to be more than a kind of religious statement, then it is you who would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅. It is you who would need to show how uncomputable concepts emerge from computable substrates without violating containment (and that means: without violating its own logic, since in formal systems logic and containment end up as the same thing: your symbol set defines your expressive space; step outside it, and you’re no longer reasoning — you’re redefining the space, the universe you’re reasoning in).

Otherwise, the limitation stands — and the claim that “AGI can do anything humans do” remains an ungrounded leap of faith.

Also: if you believe the only valid proof of AGI impossibility must rest on metaphysical redefinition of humanity as “super-Turing,” then you’ve set an artificial constraint that ensures no such proof could ever exist, no matter the logic.

That’s trading epistemic rigor for intellectual insulation.

As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.

There's nothing rigorous about this. It's pure crackpottery.

As long as you claim to disprove AGI, it inherently follows that you need to prove that humans exceed the Turing computable to succeed. Since you specifically state that you are not trying to prove that humans exceed the Turing computable, you're demonstrating a fundamental lack of understanding of the problem.

> 3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

This is only true if humans exceed the Turing computable, as otherwise humans are proof that this is something an algorithmic system can do. So despite claiming that you're not trying to prove that humans exceed the Turing computable, you are making exactly that claim.

> I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).

This is a direct statement that you claim that humans are observed to exceed the Turing computable.

> then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅

This is fundamental to Turing equivalence. If there exists any Turing machine that can generate Σ′, then a universal Turing machine can generate Σ′ as well, simply by simulating it.

Anything that is possible with any Turing machine is, in fact, possible with a machine with as few as 2 symbols (the smallest known universal Turing machine is a (2,3) machine — 2 states and 3 symbols — but per Shannon you can always trade states for symbols, so a (3,2) machine with only 2 symbols is also possible). This is because you can always simulate an environment where a larger alphabet is encoded with multiple symbols.
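The encoding step above can be sketched concretely. This is a minimal illustration, not a full TM simulation: a machine whose tape alphabet is fixed at {0, 1} represents an arbitrary — even "extended" — alphabet via fixed-width binary codewords, so a "new symbol" is just a new codeword, never a new tape symbol. The helper names (`make_codec`, `sigma`, `sigma_prime`) are illustrative.

```python
# A machine whose tape alphabet is just {"0", "1"} can still represent any
# larger alphabet: assign each symbol a fixed-width binary codeword.

def make_codec(alphabet, width):
    """Map each symbol of `alphabet` to a fixed-width binary string."""
    assert len(alphabet) <= 2 ** width, "codeword width too small"
    enc = {s: format(i, f"0{width}b") for i, s in enumerate(alphabet)}
    dec = {v: k for k, v in enc.items()}
    return enc, dec

# Original frame: sigma = {a, b}. The "extended" frame sigma_prime adds c,
# so sigma_prime \ sigma = {"c"} is non-empty.
sigma = ["a", "b"]
sigma_prime = sigma + ["c"]

enc, dec = make_codec(sigma_prime, width=2)

# The tape itself never contains anything but 0s and 1s...
tape = "".join(enc[s] for s in ["a", "c", "b", "c"])
assert set(tape) <= {"0", "1"}

# ...yet it encodes a string over the extended alphabet, including "c".
decoded = [dec[tape[i:i + 2]] for i in range(0, len(tape), 2)]
print(decoded)  # ['a', 'c', 'b', 'c']
```

The point of the sketch: "bounded to Σ" constrains the physical tape symbols, not the expressive space, because the expressive space is whatever the encoding scheme ranges over.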

> As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.

This is exactly the part that fails.

Any universal TM can simulate any other, and by extension, any TM can be extended to any alphabet through simulation, encoding the new symbols over its existing ones.

If you don't understand this, then you don't understand the very basics of Turing Machines.