If a human brain works why can't AGI?

I think the problem with "AGI" is that people don't want "AGI," they want Einstein as their butler. A merely generally intelligent AI might be only as intelligent as the average human.

One problem with the paper is that it defines AGI such that failure on an inherently unsolvable problem counts as evidence that AGI is impossible. It also tries to synthesise a definition from sources whose own definitions have no particular reason to overlap in any meaningful way.

I'm just not sure "AGI" is a useful term at this point. It's either something trivially reachable from what we can see today or something totally impossible, depending entirely on which conclusion the speaker prefers.