> And - as wonderfully remarkable as such a system might be - it would, for our investigation, be neither appropriate nor fair to overburden AGI by an operational definition whose implicit metaphysics and its latent ontological worldviews lead to the epistemology of what we might call a “total isomorphic a priori” that produces an algorithmic world-formula that is identical with the world itself (which would then make the world an ontological algorithm...?).

> Anyway, this is not part of the questions this paper seeks to answer. Neither will we wonder in what way it could make sense to measure the strength of a model by its ability to find its relative position to the object it models. Instead, we chose to stay ignorant - or agnostic? - and take this fallible system called "human". As a point of reference.

Cowards.

That's the main counterargument, and acknowledging its existence without addressing it is a craven dodge.

Assuming the assumptions[1] are true, human intelligence isn't even formalizable under the same pretext.

Either human intelligence isn't:

1. Algorithmic. The main point of contention. If humans aren't algorithmically reducible, even at the level of the computation of physics, then human cognition is supernatural.

2. Autonomous. Trivially true given that humans are the baseline.

3. Comprehensive (general). Trivially true since humans are the baseline.

4. Competent: Trivially true given humans are the baseline.

I'm not sure how they reconcile this, given that they simply dodge the consequences it implies.

Overall, not a great paper. It's much more likely that their formalism is wrong than their conclusion.

Footnotes

1. not even the consequences, unfortunately for the authors.

Just to make sure I understand:

– Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted? Or better: is that metaphysical setup an argument?

If that’s the game, fine. Here we go:

– The claim that one can build a true, perfectly detailed, exact map of reality is... well... ambitious. It sits remarkably far from anything resembling science, since it’s conveniently untouched by that nitpicky empirical thing called evidence. But sure: freed from falsifiability, it can dream big and give birth to its omnicartographic offspring.

– Oh, quick follow-up: does that “perfect map” include itself? If so... say hi to Alan Turing (a sketch of the diagonalization follows this list). If not... well, greetings to Herr Gödel.

– Also: if the world only shows itself through perception and cognition, how exactly do you map it “as it truly is”? What are you comparing your map to — other observations? Another map?

– How many properties, relations, transformations, and dimensions does the world have? Over time? Across domains? Under multiple perspectives? Go ahead, I’ll wait... (oh, and: hi too... you know who)
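To make the Turing half of that jab concrete, here is a minimal sketch, assuming the "perfect map" is itself an algorithm that can answer every question, including questions about itself. The names `oracle` and `diagonal` are hypothetical illustrations of the classic halting-problem diagonalization, not anything from the paper.

```python
# Assumption: a "perfect map" that includes itself would have to act as a
# total, always-correct predictor of program behavior. Suppose such an
# oracle(program, data) existed, answering whether `program` halts on `data`.

def oracle(program, data):
    # Hypothetical perfect predictor. No total, correct implementation can
    # exist; this stub only makes the sketch self-contained.
    raise NotImplementedError("no total halting oracle exists")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own description.
    if oracle(program, program):  # oracle says: halts...
        while True:               # ...so loop forever
            pass
    return "halted"               # oracle says: loops, so halt immediately

# Feeding `diagonal` its own description forces a contradiction: whatever
# oracle(diagonal, diagonal) answers is wrong. A map that must describe
# itself therefore can't be both total and correct.
```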

And btw, the true, detailed map of the world exists... It’s the world.

It’s just sort of hard to get a copy of it. Not enough material available... and/or not enough compute...

P.S. Sorry if that came off sharp — bit of a spur-of-the-moment reply. If you want to actually dig into this seriously, I’d be happy to.

> Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted?

If you are claiming that human intelligence is not "general", you'd better put a huge disclaimer in your text. You are free to redefine words to mean whatever you want, but if you use something so different from the way the entire world uses it, the onus is on you to make that very clear.

And the alternative is you claiming human intelligence is impossible... which would make your paper wrong.

I don't think that's a redefinition. "General" in common usage refers to something that spans all subtypes. For humans to be generally intelligent, there would have to be no type of intelligence that they don't exhibit; that's a bold claim.

I mean, I think it is becoming increasingly obvious that humans aren't doing as much as we thought they were. So yes, this seems like an overly ambitious definition of what we would in practice call AGI. Can someone ELI5 the requirements this paper puts on something to be considered a general intelligence?