The paper skips over the definition of AI. It jumps straight into AGI, and that depends on what AI means: LLMs, deep neural networks, or any possible implementation on a Turing machine. A claim about the latter, I suspect, would be extremely difficult to prove. So far almost everything can be simulated by a Turing machine, and there's no reason one couldn't also simulate a human brain, and therefore AGI. Even if the claim is that the human brain alone is not enough for general intelligence (and that our bodies are also part of the intelligence equation), we could still, in theory, simulate an entire human being down to every cell (although in practice that won't happen anytime soon, unless maybe with quantum computers, but I digress).
Still an interesting take, and I'll need to dive in more, but if we assume the brain is doing information processing, then the immediate question is how the brain avoids this problem, as others are pointing out. Is biological computation/intelligence special?
Turing machines only model computation. Real life is interaction. Check the work of Peter Wegner. Once interaction machines enter the picture, AI can be embodied, situated, and able to participate in adaptation processes. The emergent behaviour may bring AGI into a pragmatic perspective. But interaction is far more expressive than computation, which makes theoretical analysis challenging.
Interaction is just another form of computation: clearly we can interact with computers, and we can also simulate that interaction within the computer, so yes, Turing machines can handle it. I'll check out Wegner.
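To make that concrete, here's a toy Python sketch of what I mean (my own illustration, not Wegner's formalism; the `agent` and `environment` functions are hypothetical): an agent and an environment coupled in a feedback loop, where the environment's next input depends on the agent's previous output, and the whole exchange still runs as one ordinary program, i.e. as plain computation.

```python
# Toy sketch: a closed agent-environment interaction loop simulated
# inside a single program. Both sides are ordinary step functions of
# the form (state, input) -> (new_state, output).

def agent(state, observation):
    """Agent step: a pure function of its state and the latest observation."""
    count = state.get("count", 0) + 1
    return {"count": count}, f"agent action {count} after seeing {observation!r}"

def environment(state, action):
    """Environment step: reacts to the agent's last action."""
    history = state.get("history", []) + [action]
    return {"history": history}, f"env response to {action!r}"

def run_loop(steps=3):
    """Run the coupled loop; the 'interaction' is just iterated computation."""
    a_state, e_state = {}, {}
    observation = "initial stimulus"
    transcript = []
    for _ in range(steps):
        a_state, action = agent(a_state, observation)
        e_state, observation = environment(e_state, action)
        transcript.append((action, observation))
    return transcript

if __name__ == "__main__":
    for action, obs in run_loop():
        print(action, "->", obs)
```

Whether this kind of internalised simulation captures everything Wegner means by "interaction machines" is exactly the point under dispute, but it shows the sense in which a feedback loop with an environment can be replayed on a Turing machine.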