Haha, this is the weird thing about definition debates: you often don't disagree about anything substantial =P Thanks for the measured response.
> An AGI has to be autonomous. It cannot "go nuts" without handholding [...]
So I think this is where I get off your bus - regardless of what you call it, I think current agentic systems like Claude Code are already there. They can construct their own handholds as they go. For instance, I have a section in all my CLAUDE.md files telling them to always develop within a feedback loop (like a test), and to set one up themselves if necessary - a rough sketch below. It works remarkably well!
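Roughly, it goes something like this (paraphrasing from memory; the exact wording varies by project):

```markdown
## Development workflow

- Always work inside a fast feedback loop. Before changing code, find or
  create a way to verify the change automatically (a test, a script, a
  REPL session).
- If no such loop exists yet, set one up first, e.g. write a failing test
  that captures the desired behavior, then make it pass.
- Run the loop after every meaningful change, not just at the end.
```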
There are lots of aspects of human cognition they don't seem to share... like curiosity or a drive for survival (hopefully lol). And creativity is very weak right now - although even there I think there's evidence these systems have some ability to be creative. So if you want that in your AGI, yeah, it's got a ways to go.
The situation seems far too murky to support an impossibility theorem, though (to me at least).
> in the end I think we can agree that "something like how we do thinking" is pretty close to any valid definition, no?
I agree, we aren't even close to human-level ability here. I just think people get hung up on looking at a bunch of tensors, when to me the real complexity emerges once these things are embedded in an environment.
All these arguments that treat these systems as pure Turing machines miss this, I think. You don't study ecology by taking organisms out individually and cutting them up. There's value in that, of course, but the interactions are where the really interesting stuff happens.