I'm not sure if this will help, but happy to elaborate further:
The set of Turing computable functions is computationally equivalent to the lambda calculus, which in turn is equivalent to the general recursive functions. You don't need to understand those terms, only to know that these formalisms define the set of functions we believe includes all computable functions. (There are functions that we know are not computable, such as a general solution to the halting problem.)
That is, we don't know of any possible way of defining a function that can be computed that isn't in those sets.
This is basically the Church-Turing thesis: that a function on the natural numbers is effectively computable (note: "effectively" has a very specific technical meaning here; it's not about performance) only if it is computable by a Turing machine.
Now, any Turing machine can simulate any other Turing machine. Possibly in a crazy amount of time, but still.
The brain is at least equivalent to a Turing machine in terms of computability if we treat "IO" (speaking and hearing, for example) as the "tape" (the storage medium in the original description of the Turing machine). We can prove this, since the smallest known universal Turing machine is a trivial machine with just 2 states and 3 symbols, and any moderately functional human is capable of "executing" it with pen and paper.
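To make the pen-and-paper point concrete: running a Turing machine is nothing but repeated table look-ups. Here is a minimal sketch in Python of a generic stepper; the rule table is a toy example made up for illustration (not the 2-state/3-symbol universal machine itself), but executing that one by hand is exactly the same mechanical procedure, and the stepper also shows one machine (a Python program) simulating another.

    from collections import defaultdict

    def run(rules, tape, state, steps):
        # Unbounded tape as a dict from position -> symbol, blank = 0.
        tape = defaultdict(int, enumerate(tape))
        head = 0
        for _ in range(steps):
            key = (state, tape[head])
            if key not in rules:           # no matching rule: halt
                break
            write, move, state = rules[key]
            tape[head] = write
            head += move                   # +1 = right, -1 = left
        return [tape[i] for i in range(min(tape), max(tape) + 1)], state

    # Toy rule table: in state "A", overwrite the cell with 1 and move
    # right; halt on the first symbol with no matching rule (here, 2).
    rules = {
        ("A", 0): (1, +1, "A"),
        ("A", 1): (1, +1, "A"),
    }
    print(run(rules, [0, 1, 0, 2], "A", steps=100))  # ([1, 1, 1, 2], 'A')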
(As an aside: it's almost hard to construct a useful computational system that isn't Turing complete; "accidental Turing completeness" happens regularly, because it is very easy to end up with a Turing complete system.)
An LLM with a loop around it and temperature set to 0 can trivially be shown to be able to execute the same steps, using context as input and the next token as output to simulate the tape, and so such a system is Turing complete as well.
(Note: In both cases this could require a program, but for any Turing machine of a given size we can "embed" parts of that program by constructing a more complex Turing machine with more symbols or states that encode some of its actions, so the program can inherently be built into the machine itself by constructing a complex enough Turing machine.)
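To sketch what "an LLM with a loop around it" means here: treat the context as the tape and a deterministic, temperature-0 next-token call as the transition function. The next_token function below is a hypothetical stand-in, not a real API; only the shape of the loop matters.

    # Sketch: a loop that uses a deterministic LLM call as a Turing
    # machine's transition function. next_token is a hypothetical
    # placeholder - swap in any temperature-0 model call.
    def next_token(prompt):
        # Placeholder so the sketch runs; a real system would query a model
        # and parse its reply.
        return "HALT"

    def run_llm_machine(tape, state="A", max_steps=1000):
        head = 0
        for _ in range(max_steps):
            prompt = f"state={state} symbol={tape[head]} -> reply write,move,state or HALT"
            action = next_token(prompt)        # e.g. "1,R,B"
            if action == "HALT":
                break
            write, move, state = action.split(",")
            tape[head] = write
            head += 1 if move == "R" else -1
            if head < 0:                       # extend the tape on demand
                tape.insert(0, "_")
                head = 0
            elif head == len(tape):
                tape.append("_")
        return tape

    print(run_llm_machine(list("0110")))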
Assuming we use a definition of intelligence that a human will meet, then because all Turing machines can simulate each other, the only way of showing that an artificial intelligence cannot theoretically be constructed to at least meet the same bar is by showing that humans can compute more than the Turing computable.
If we can't, then in the "worst case" AGI can be constructed by simulating every computational step of the human brain.
Any other argument about the impossibility of AGI inherently needs to contain something that disproves the Church-Turing thesis.
As such, it's a massive red flag when someone claims to have a proof that AGI isn't possible but hasn't even mentioned the Church-Turing thesis.
Computing functions != intelligence, though.
For example, learning from experience (which LLMs cannot do, because they cannot experience anything and they cannot learn) is clearly an attribute of an intelligent machine.
LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.
> Computing functions != intelligence, though.
If that is true, you have a proof that the Church-Turing thesis is false.
> LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.
For this to be relevant, you'd need to show that there are possible sensory inputs that can't be simulated to a point where the "brain" in question - be it natural or artificial - can't tell the difference.
Which again, would boil down to proving the Church-Turing thesis wrong.
I think that may depend on how someone defines intelligence. For example, if intelligence includes the ability to feel emotion or appreciate art, then I think it becomes much more plausible that intelligence is not the same as computation.
Of course, simply stating that isn't in and of itself a philosophically rigorous argument. However, given that not everyone has training in philosophy, and that it may not even be possible to prove whether "feeling emotion" can be achieved via computation, I think it's a reasonable argument.
I think if they define intelligence that way, it isn't a very interesting discussion, because we're back to Church-Turing: either they can show that this actually has an effect on the ability to reason and on the possible outputs of the system, in a way that somehow exceeds the Turing computable, or those aspects are irrelevant to an outside observer of said entity, because the entity would still be able to act in exactly the same way.
I can't prove that you have a subjective experience of feeling emotion, and you can't prove that I do - we can only determine that either one of us acts as if we do.
And so this is all rather orthogonal to how we define intelligence, as whether or not a simulation can simulate such aspects as "actual" feeling is only relevant if the Church-Turing thesis is proven wrong.
What program would a Turing machine run to spontaneously prove the incompleteness theorem?
Can you prove such a program may exist?
Assuming the Church-Turing thesis is true, the existence of any brain, now or in the past, capable of proving the incompleteness theorem is proof that such a program may exist.
Conversely, if the Church-Turing thesis could be proven false, then it might be possible that no such program exists - the thesis being false is a necessary but not sufficient condition for that.
Given that we have no evidence suggesting the Church-Turing thesis is false, or even that it could be false, the burden falls on those making the utterly extraordinary claim that such programs can't exist to actually provide evidence for that claim.
Can you prove the Church-Turing thesis false? Or even give a suggestion of what a function that might be computable but not Turing computable would look like?
Keep in mind that any step-by-step explanation of how to compute such a function would need to contain at least one step that can't be explained in a way that allows that step to be computed by a Turing machine, or the explanation itself would instantly disprove your claim.
The very notion is so extraordinary as to require truly extraordinary proof, and there is none.
A single example of a function that is not Turing computable but that human intelligence can compute should be a low burden to produce if we really can exceed the Turing computable.
Where are the examples?
> Assuming the Church-Turing thesis is true, the existence of any brain, now or in the past, capable of proving the incompleteness theorem is proof that such a program may exist.
Doesn't that assume that the brain is a Turing machine or equivalent to one? My understanding is that the exact nature of the brain and how it relates to the mind is still an open question.
That is exactly the point.
If the Church-Turing thesis is true, then the brain is a Turing machine / Turing equivalent.
And so, assuming Church-Turing is true, the existence of the brain is proof of the possibility of AGI, because any Turing machine can simulate any other Turing machine (possibly far too slowly to be practical, but slowness does not make it impossible).
And so, any proof that AGI is "mathematically impossible", as the title claims, is inherently going to contain within it a proof that the Church-Turing thesis is false.
In which case there should be at least one example of a function a human brain can compute that a Turing machine can't.
An accurate-enough physical simulation of Kurt Gödel's brain.
Such a program may exist - unless you think such a simulation of a physical system is uncomputable, or that there is some non-physical process going on in that brain.
> the only way of showing that an artificial intelligence cannot theoretically be constructed to at least meet the same bar is by showing that humans can compute more than the Turing computable.
I would reframe: the only way of showing that artificial intelligence can be constructed is by showing that humans cannot compute more than the Turing computable.
Given that Turing computable functions are a vanishingly small subset of all functions, I would posit that that is a rather large hurdle to meet. Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.
Given that we know of no computable function that isn't Turing computable, and that the set of Turing computable functions is known to be equivalent to the lambda calculus and to the set of general recursive functions, the immensely large hurdle would be to show even a single example of a computable function that is not Turing computable.
If you can do so, you'd have proven Turing, Kleene, Church, and Gödel wrong, and disproven the Church-Turing thesis.
No such example is known to exist, and no such function is thought to be possible.
> Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.
1/3 symbolically represents an infinite process. The notion that a finite alphabet can't describe infinity is trivially flawed.
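For instance, a few lines of code - a finite string over a finite alphabet - are enough to denote the entire, never-ending decimal expansion of any rational number:

    from itertools import islice

    def decimal_digits(p, q):
        # Long division: a finite description enumerating an infinite expansion.
        r = p % q
        while True:
            r *= 10
            yield r // q
            r %= q

    print(list(islice(decimal_digits(1, 3), 8)))  # [3, 3, 3, 3, 3, 3, 3, 3]
    print(list(islice(decimal_digits(1, 7), 8)))  # [1, 4, 2, 8, 5, 7, 1, 4]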
Function != Computable Function / general recursive function.
That's my point - computable functions are a [vanishingly] small subset of all functions.
For example (and close to our hearts!), the Halting Problem. There is a function from valid programs to halt/not-halt. This is clearly a function, as it has a well-defined domain and codomain and produces the same output for the same input. However, it is not computable!
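For a sketch of why, here is the standard diagonal argument written out in Python; the halts function is purely hypothetical, and assuming it could exist is exactly what produces the contradiction:

    # Hypothetical: assume halts(program, arg) correctly returns True/False
    # for whether program(arg) eventually halts.
    def halts(program, arg):
        raise NotImplementedError  # the point is that this can never be filled in

    def contrary(program):
        # Do the opposite of whatever halts() predicts about program(program).
        if halts(program, program):
            while True:
                pass               # loop forever
        return "halted"

    # Consider contrary(contrary):
    # - if halts(contrary, contrary) returns True, contrary(contrary) loops forever;
    # - if it returns False, contrary(contrary) halts immediately.
    # Either way halts() gives a wrong answer, so no correct, total halts() exists.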
For sure a finite alphabet can describe an infinity, as you show - but not every infinity. For example, almost all real numbers cannot be defined or described with a finite string over a finite alphabet (they can of course be defined with countably infinite strings over a finite alphabet).
Non-computable functions are not relevant to this discussion, though, because humans can't compute them either, and so inherently an AGI need not be able to compute them.
The point remains that we know of no function humans can compute that is not in the Turing computable / general recursive function / lambda calculus set. Absent any indication that such a function is even possible, much less an example, it is no more reasonable to believe humans exceed the Turing computable than that we're surrounded by invisible pink unicorns, and the evidence would need to be equally extraordinary for there to be any reason to entertain the idea.
Humans do a lot of stuff that is hard to 'functionalise', computable or otherwise, so I'd say the burden of proof is on you. What's the function for creating a work of art? Or driving a car?