There's nothing rigorous about this. It's pure crackpottery.

If you claim to disprove AGI, it follows that you need to prove that humans exceed the Turing computable in order to succeed. Since you specifically state that you are not trying to prove that humans exceed the Turing computable, you're demonstrating a fundamental misunderstanding of the problem.

> 3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

This is only true if humans exceed the Turing computable; otherwise, humans themselves are proof that this is something an algorithmic system can do. So despite claiming that you're not trying to prove that humans exceed the Turing computable, that is exactly the claim you are making.

> I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).

This is a direct claim that humans are observed to exceed the Turing computable.

> then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅

This is fundamental to Turing equivalence. If any Turing machine can generate Σ′, then every Turing machine can generate Σ′ under a suitable encoding, because any Turing machine can simulate any other.

Anything that is possible with any Turing machine is, in fact, possible with a machine with as few as 2 symbols. The smallest known universal Turing machine is the (2,3) machine, with 2 states and 3 symbols, and per Shannon you can always trade states for symbols, so a universal machine with only 2 symbols is also possible. This is because you can always simulate a larger alphabet by encoding each of its symbols as a string of several symbols from the smaller one.
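To make the encoding point concrete, here's a minimal sketch (the helper `make_codec` and the example alphabet are mine, not any specific construction from the literature): a machine restricted to the binary alphabet {0, 1} can still represent and manipulate a richer alphabet Σ′ by encoding each of its symbols as a fixed-width binary word.

```python
from math import ceil, log2

def make_codec(alphabet):
    """Map each symbol of a larger alphabet to a fixed-width binary word."""
    width = max(1, ceil(log2(len(alphabet))))
    encode = {s: format(i, f"0{width}b") for i, s in enumerate(alphabet)}
    decode = {word: s for s, word in encode.items()}
    return encode, decode, width

sigma_prime = ["a", "b", "c", "d", "e"]  # richer alphabet Σ′, 5 symbols
encode, decode, width = make_codec(sigma_prime)  # width = 3 bits per symbol

# The "tape" below contains only 0s and 1s, yet it faithfully carries Σ′:
tape = "".join(encode[s] for s in "badce")
recovered = "".join(decode[tape[i:i + width]]
                    for i in range(0, len(tape), width))
# recovered == "badce"
```

Nothing about the binary restriction prevents the machine from computing over Σ′; the alphabet size only changes how many cells each logical symbol occupies.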

> As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.

This is exactly the part that fails.

Any TM can simulate any other, and by extension, any TM can be extended to any alphabet through simulation: symbols outside its own alphabet are simply represented as encoded strings of symbols within it.
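Here's a small sketch of that simulation step by step (the machine, its transition table, and the helper names are illustrative assumptions, not a standard reference construction): a TM defined over the 3-symbol alphabet {'a', 'b', '_'} is run on a tape that physically contains only 0s and 1s, with each logical symbol stored as a 2-bit word.

```python
# Logical machine over Σ′ = {'a', 'b', '_'}: rewrite every 'a' to 'b',
# move right, halt on blank. delta: (state, symbol) -> (state, write, move)
delta = {
    ("q", "a"): ("q", "b", +1),
    ("q", "b"): ("q", "b", +1),
    ("q", "_"): ("halt", "_", 0),
}

encode = {"a": "00", "b": "01", "_": "10"}  # 2-bit codes, width = 2
decode = {word: s for s, word in encode.items()}
WIDTH = 2

def simulate_on_binary(tape_syms, start="q"):
    """Run the 3-symbol machine on a strictly binary tape."""
    tape = list("".join(encode[s] for s in tape_syms))  # only '0'/'1' cells
    pos, state = 0, start
    while state != "halt":
        sym = decode["".join(tape[pos:pos + WIDTH])]   # read one 2-bit word
        state, out, move = delta[(state, sym)]
        tape[pos:pos + WIDTH] = list(encode[out])      # write one 2-bit word
        pos += move * WIDTH                            # head moves in words
    return "".join(decode["".join(tape[i:i + WIDTH])]
                   for i in range(0, len(tape), WIDTH))

result = simulate_on_binary("aab_")
# result == "bbb_"
```

The binary machine never contains a cell holding 'a' or 'b', yet under the agreed decoding it computes exactly the function of the 3-symbol machine. That is the standard sense in which a TM's output is not limited by its raw alphabet.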

If you don't understand this, then you don't understand the very basics of Turing Machines.