One problem with the paper is that it defines AGI in such a way that failing to solve an inherently unsolvable problem counts as evidence that AGI is impossible. It also tries to synthesise a definition from sources whose own definitions have no particular reason to overlap in any meaningful way.
I'm just not sure "AGI" is a useful term at this point. It's either trivially reachable from what we can see today or totally impossible, depending entirely on the speaker's preference.