Thanks — and yes, Penrose’s argument is well known.

But this isn’t that argument: I’m not making a claim about consciousness, and I’m not invoking quantum physics or microtubules (which, I agree, are highly speculative).

The core of my argument rests on computability and information theory, not biology. Specifically: algorithmic systems hit hard formal limits in decision contexts with irreducible complexity or semantic divergence, and those limits are provable with existing mathematical tools (Shannon’s entropy bounds, Rice’s theorem, and so on).
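To make the computability prong concrete, here is a minimal sketch of the standard Rice-style reduction. Every name in it (decides_property, run, program_with_property) is a hypothetical placeholder rather than anything from a real library; the only point is that a total, always-correct decider for any non-trivial semantic property of programs would let you decide the halting problem, which is impossible.

```python
# Minimal sketch (hypothetical names throughout) of the classic Rice-style
# reduction: a total, always-correct decider for any non-trivial semantic
# property P of programs would yield a halting-problem decider.

def decides_property(program_source: str) -> bool:
    """Hypothetical decider for a non-trivial semantic property P.
    Rice's theorem says no such total, always-correct function exists."""
    raise NotImplementedError  # placeholder; this is the thing being refuted

def halts(program_source: str, input_data: str) -> bool:
    """If decides_property existed, halting would be decidable too."""
    # Build a wrapper that first runs the candidate (program, input) pair,
    # then behaves like some fixed program known to satisfy P.
    wrapper_source = (
        "def wrapped(x):\n"
        f"    run({program_source!r}, {input_data!r})  # diverges if no halt\n"
        "    return program_with_property(x)  # known to satisfy P\n"
    )
    # If the pair halts, `wrapped` is equivalent to a program with property P;
    # if it diverges, `wrapped` computes the nowhere-defined function, which
    # (without loss of generality) lacks P. So this call would decide halting.
    return decides_property(wrapper_source)
```

The information-theoretic prong is analogous in spirit: Fano’s inequality lower-bounds the error probability of any decision rule by the residual conditional entropy of the target given the observations, no matter how much compute the algorithm has.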

So, in a sense, this is the non-microtubule version of the AI critique. I don’t have the physics background to engage in Nobel-level quantum speculation, and, luckily, it isn’t needed here.

Seems like all you need to prove the general case is Gödelian incompleteness. As with incompleteness, entropy-based arguments may never actually interfere with getting work done in the real world with real AI tools.