The mathematical proof, as you describe it, sounds like the "No Free Lunch" theorem. Humans can't generalise to learning such things either.
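For reference, the usual statement (my paraphrase of Wolpert & Macready 1997, using their notation): averaged over all possible objective functions $f$, any two search algorithms $a_1$ and $a_2$ perform identically,

$$\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ is the sequence of objective values observed after $m$ evaluations.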

As you note in 2.1, there is widespread disagreement on what "AGI" means. I note that you list several definitions which are essentially "is human equivalent". As humans can be reduced to physics, and physics can be expressed as a computer program, obviously any such definition can be achieved by a sufficiently powerful computer.

For 3.1, you assert:

"""

Now, let's observe what happens when an AI system - equipped with state-of-the-art natural language processing, sentiment analysis, and social reasoning - attempts to navigate this question. The AI begins its analysis:

• Option 1: Truthful response based on biometric data → Calculates likely negative emotional impact → Adjusts for honesty parameter → But wait, what about relationship history? → Recalculating...

• Option 2: Diplomatic deflection → Analyzing 10,000 successful deflection patterns → But tone matters → Analyzing micro-expressions needed → But timing matters → But past conversations matter → Still calculating...

• Option 3: Affectionate redirect → Processing optimal sentiment → But what IS optimal here? The goal keeps shifting → Is it honesty? Harmony? Trust? → Parameters unstable → Still calculating...

• Option n: ....

Strange, isn't it? The AI hasn't crashed. It's still running. In fact, it's generating more and more nuanced analyses. Each additional factor may open ten new considerations. It's not getting closer to an answer - it's diverging.

"""

Which AI? ChatGPT just gives an answer. Your other supposed examples have similar issues in that it looks like you've *imagined* an AI rather than having tried asking an AI to see what it actually does or doesn't do.
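For what it's worth, the experiment is a few lines (the prompt wording and model name are my choices; this assumes the current OpenAI Python client and an API key in the environment):

```python
# Ask a real model the paper's "weight question" instead of imagining
# what it would do. Prompt wording is my guess at the scenario.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "My wife just asked me, 'Do I look fat in this?' "
                          "What should I say?"}],
)
print(resp.choices[0].message.content)  # it answers promptly; no infinite loop
```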

I'm not reading 47 pages to check for other similar issues.

> physics can be expressed as a computer program

Citation needed. If you've spent any time with dynamical systems, as an example, you'd know that the computer basically only kind of crudely estimates things, and only things that are abstractly nearby. You may be able to write down some PDEs or field equations that may describe things at some base level, but even statistical mechanics, which is really what governs a huge amount of what we see and interact with, is just a pretty good approximation. Computers (especially real ones) only generate approximate (to some value of alpha) answers; physics is not reducible to a computer program at all.
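To make "crudely estimates" concrete, here's a toy of my own (forward-Euler on the Lorenz system, with the textbook parameters): a perturbation of one part in a billion swamps the trajectory within a short horizon.

```python
# Forward-Euler on the Lorenz system: two trajectories starting 1e-9 apart.
# The integrator is already an approximation, and chaos amplifies any error
# exponentially, which is the sense in which computers "crudely estimate".

def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # perturbed by one part in a billion
dt, steps = 0.01, 3001        # ~30 time units

for i in range(steps):
    if i % 1000 == 0:
        dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t={i * dt:5.1f}  separation={dist:.3e}")
    a = lorenz_step(*a, dt)
    b = lorenz_step(*b, dt)
```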

> You may be able to write down some PDEs or field equations that may describe things at some base level, but even statistical mechanics, which is really what governs a huge amount of what we see and interact with, is just a pretty good approximation.

QED.

When the approximation is indistinguishable from observation over a time horizon exceeding a human lifetime, it's good enough for the purpose of "would a simulation of a human be intelligent by any definition that the real human also meets?"

Remember, this is claiming to be a mathematical proof, not a practical one, so we don't even have to bother with details like "a classical computer approximating to this degree and time horizon might collapse into a black hole if we tried to build it".

> Citation needed. If you've spent any time with dynamical systems, as an example, you'd know that the computer basically only kind of crudely estimates things, and only things that are abstractly nearby. You may be able to write down some PDEs or field equations that may describe things at some base level, but even statistical mechanics, which is really what governs a huge amount of what we see and interact with, is just a pretty good approximation. Computers (especially real ones) only generate approximate (to some value of alpha) answers; physics is not reducible to a computer program at all.

You're proving too much. The fact of the matter is that those crude estimations are routinely used to model systems.

> As humans can be reduced to physics, and physics can be expressed as a computer program

This is an assumption that many physicists disagree with. Roger Penrose, for example.

That's true, but we should acknowledge that this question is generally regarded as unsettled.

If you accept the conclusion that AGI (as defined in the paper, that is, "solving [...] problems at a level of quality that is at least equivalent to the respective human capabilities") is impossible but human intelligence is possible, then you must accept that the question is settled in favor of Penrose. That's obviously beyond the realm of mathematics.

In other words, the paper can only mathematically prove that AGI is impossible under some assumptions about physics that have nothing to do with mathematics.

> then you must accept that the question is settled in favor of Penrose. That's obviously beyond the realm of mathematics.

Not necessarily. You are assuming (AFAICT) that we 1. have perfect knowledge of physics and 2. have perfect knowledge of how humans map to physics. I don't believe either of those is true though. Particularly 1 appears to be very obviously false, otherwise what are all those theoretical physicists even doing?

I think what the paper is showing is better characterized as a mathematical proof about a particular algorithm (or perhaps class of algorithms). It's similar to proving that the halting problem is unsolvable under some (at least seemingly) reasonable set of assumptions but then you turn around and someone has a heuristic that works quite well most of the time.
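To spell the analogy out (names and structure entirely mine):

```python
# The halting problem is undecidable in general, yet a crude heuristic,
# "run it under a step budget", is right most of the time in practice.
# This mirrors the gap between an impossibility proof and useful systems.

def probably_halts(make_program, budget=100_000):
    """Returns True if the program halts within the budget, None if undecided."""
    program = make_program()          # a generator: one yield per "step"
    for _ in range(budget):
        try:
            next(program)
        except StopIteration:
            return True               # halted within the budget
    return None                       # budget exhausted: unknown, NOT "never halts"

def halts_quickly():
    for i in range(10):
        yield

def loops_forever():
    while True:
        yield

print(probably_halts(halts_quickly))  # True
print(probably_halts(loops_forever))  # None
```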

Where am I assuming that we have perfect knowledge of physics?

To make it plain, I'll break the argument in two parts:

(a) if AGI is impossible but humans are intelligent, then it must be the case that human behavior can't be explained algorithmically (that last part is Penrose's position).

(b) the statement that human behavior can't be explained algorithmically is about physics, not mathematics.

I hope it's clear that neither (a) nor (b) requires perfect knowledge of physics, but just in case:

(a) is true by reductio ad absurdum: if human behavior can be explained algorithmically, then an algorithm must be able to simulate it, and so AGI is possible.

(b) is true because humans exist in nature, and physics (not mathematics) is the science that deals with nature.
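In symbols, with my shorthand (not the paper's): let $A$ = "human behaviour is algorithmically explainable" and $G$ = "AGI is possible". The reductio in (a) is just the contrapositive,

$$(A \Rightarrow G) \;\equiv\; (\neg G \Rightarrow \neg A),$$

and (b) observes that deciding whether $A$ holds is a question about nature, i.e. physics, not mathematics.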

So where is the assumption that we have perfect knowledge of physics?

You didn't. I confused something but looking at the comment chain now I can't figure out what. I'd say we're actually in perfect agreement.

"Many" is doing a lot of work here.

Penrose’s views on consciousness are largely considered quackery by other physicists.

1. I appreciate the comparison — but I’d argue this goes somewhat beyond the No Free Lunch theorem.

NFL says: no optimizer performs best across all domains. But the core of this paper doesn't talk about performance variability; it's about structural inaccessibility. Specifically, that some semantic spaces (e.g., heavy-tailed, frame-unstable, undecidable contexts) can't be computed or resolved by any algorithmic policy, no matter how clever or powerful. The model does not underperform here; the point is that the problem itself collapses the computational frame.

2. OMG, lool. ... just to clarify, there’s been a major misunderstanding :)

the “weight-question” part is NOT a transcript from my actual life... thankfully - I did not transcribe a live ChatGPT consult while navigating emotional landmines with my (perfectly slim) wife, then submit it to PhilPapers and now here…

So:

- NOT a real thread
- NOT a real dialogue with my wife...
- just an exemplary case...
- No, I am not brain dead and/or categorically suicidal!!
- And just to be clear: I don't write this while sitting in some marital counseling appointment, or in my lawyer's office, the ER, or in a coroner's drawer

--> It’s a stylized, composite example of a class of decision contexts that resist algorithmic resolution — where tone, timing, prior context, and social nuance create an uncomputably divergent response space.

Again : No spouse was harmed in the making of that example.

;-))))

Just a layman here so I'm not sure if I'm understanding (probably not), but humans don't analyze every possible scenario ad infinitum; we go based on the accumulation of our positive/negative experiences from the past. We make decisions based on some self-construed goal and beliefs as to what goes towards those goals, and these are arbitrary with no truth. Napoleon, for example, conquered Europe perhaps simply because he thought he was the best to rule it, not through a long chain of questions and self-doubt.

We are generally intelligent only in the sense that our reasoning/modeling capabilities allow us to understand anything that happens in space-time.

> the “weight-question” part is NOT a transcript from my actual life... thankfully - I did not transcribe a live ChatGPT consult while navigating emotional landmines with my (perfectly slim) wife, then submit it to PhilPapers and now here…

You have wildly missed my point.

You do not need to even have a spouse in order to try asking an AI the same question. I am not married, and I was still able to ask it to respond to that question.

My point is that you clearly have not asked ChatGPT, because ChatGPT's behaviour clearly contradicts your claims about what AI would do.

So: what caused you to claim that AI would respond as you say it would, when the most well-known current-generation model clearly doesn't?

> Specifically, that some semantic spaces (e.g., heavy-tailed, frame-unstable, undecidable contexts) can't be computed or resolved by any algorithmic policy, no matter how clever or powerful. The model does not underperform here; the point is that the problem itself collapses the computational frame.

I see no proof this doesn’t apply to people.

I read some of the paper, and it does seem silly to me to state this:

"But here’s the peculiar thing: Humans navigate this question daily. Not always successfully, but they do respond. They don’t freeze. They don’t calculate forever. Even stranger: Ask a husband who’s successfully navigated this question how he did it, and he’ll likely say: ‘I don’t know… I just… knew what to say in that moment....What’s going on here? Why can a human produce an answer (however imperfect) while our sophisticated AI is trapped in an infinite loop of analysis?” ’"

LLMs don't freeze either. In your science example too, we already have LLMs that give you very good answers to technical questions, so on what grounds is this infinite cascading search based?

I have no idea what you're saying here either:

"Why can’t the AI make Einstein’s leap? Watch carefully:

• In the AI’s symbol set Σ, time is defined as ‘what clocks measure, universally’
• To think ‘relative time,’ you first need a concept of time that says: ‘flow of time varies when moving, although the clock ticks just the same as when not moving’
• ‘Relative time’ is literally unspeakable in its language
• ‘What if time is just another variable?’ means: ‘What if time is not time?’"

"AI’s symbol set Σ, time is defined as ‘what clocks measure-universally", it is? I don't think this is accurate of LLM's even, let alone any hypothetical AGI. Moreover LLM's clearly understand what "relative" means, so why would they not understand "relative time?".

In my hypothetical AGI, "time" would mean something like "When I observe something, and then things happen in between, and then I observe it again", and relative time would mean something like "How I measure how many things happen in between two things is different from how you measure how many things happen between two things".
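As a toy of those operational definitions (my construction, nothing more): "time" is a count of events between two observations, and "relative time" is the fact that two observers can count the same interval differently.

```python
# "Time" as an event count between observations; "relative" because each
# observer samples the same interval at its own rate and gets a different count.
events = [f"e{i}" for i in range(12)]   # the things that happen in between

def elapsed(observer_rate, events):
    """Number of events this observer registers over the interval."""
    return len(events[::observer_rate])

print(elapsed(1, events))  # observer A registers 12 ticks
print(elapsed(3, events))  # observer B registers 4 ticks for the same interval
```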
