Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving. (And also: I am bound by thermodynamics just as my mother-in-law is, yet her mere presence still disarranges me, and I have to put laxatives in her wine to counter it.)
2. Human rationality is just as limited as algorithms are. Neither an algorithm nor human logic can find a path from Newton to Einstein's SR, because no such path exists.
3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.
In a nutshell: there obviously is no law that forbids us to innovate; we do it quite often. There is only a logical boundary, which says that there is no way to derive from a system something that is not already part of it; no way for thinking to point beyond what is thinkable.
Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... I guess "interesting thought" would not have been the likely answer... rather something like "Have you been drinking? Stop that mental crap and go away, you little moron!"
> Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving.
You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But humoring this idea: if physical laws allow it, why can't this "more than algorithmic" cognition be done artificially? As you say - we can obviously do it. What magical line prevents an artificial system from doing the same?
If by algorithmic you just mean anything that a Turing machine can do, then your theorem is asserting that the Church-Turing thesis isn't true.
Why not use that as the title of your paper? That's a more fundamental claim.
The lack of any mention of the Church-Turing thesis in both papers suggests he hasn't even considered that angle.
But it is the fundamental objection he would need to overcome.
There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.
> Basically because physical laws obviously allow more than algorithmic cognition and problem solving.
This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allows more than algorithmic cognition".
Your claim here also goes against the physical interpretation of the Church-Turing thesis.
Without rigorously addressing this, there is no point taking your papers seriously.
No problem, here is your proof - although it's a bit long:
1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where
Σ is a finite symbol set and R is a finite set of inference rules.
Let Ω′ = (Σ′, R′) be a candidate successor frame.
Define a frame jump by the Frame Jump Condition: Ω′ extends Ω if Σ′ \ Σ ≠ ∅ or R′ \ R ≠ ∅.
Let P be a deterministic Turing machine (TM) operating entirely within Ω.
Then: Lemma 1 (Symbol Containment): The output language L(P) consists of strings over Σ; P cannot emit any σ ∉ Σ.
(Here derivable outputs are finite symbol strings formed from Σ under the inference rules R.)
Proof sketch: P’s tape alphabet is fixed to Σ plus auxiliary symbols derived from Σ. By induction on computation steps, no step can introduce a symbol not already in the tape alphabet. ∎
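(For illustration, here is a minimal Python sketch of the definitions above; the class name, the rule labels and the particular symbols are purely illustrative choices of mine, not anything from the papers. It just encodes the frame-jump condition literally:)

    # A frame is a pair (Σ, R); Ω′ extends Ω iff it adds symbols or rules.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Frame:
        symbols: frozenset  # Σ, the finite symbol set
        rules: frozenset    # R, labels standing in for inference rules

    def is_frame_jump(old: Frame, new: Frame) -> bool:
        # Frame Jump Condition: Σ′ \ Σ ≠ ∅ or R′ \ R ≠ ∅
        return bool(new.symbols - old.symbols) or bool(new.rules - old.rules)

    newton = Frame(frozenset("txyzvFm"), frozenset({"galilean_transform"}))
    sr = Frame(newton.symbols | {"c", "γ", "η"}, newton.rules | {"lorentz_transform"})

    print(is_frame_jump(newton, sr))  # True: Σᴿ \ Σᴺ = {c, γ, η} ≠ ∅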
2. APPLICATION: Newton → Special Relativity
Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian frame)
Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR frame)
Let φ = “The speed of light is invariant in all inertial frames.”
Let Tᴿ be the theory of special relativity.
Let Pᴺ be a TM constrained to Σᴺ.
By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.
But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ
→ Therefore Pᴺ ⊬ φ, and hence Tᴿ ⊈ L(Pᴺ)
Thus:
Special Relativity cannot be derived from Newtonian physics within its original formal frame.
3. EMPIRICAL CONFLICT
Let:
Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)
Axiom N₂: Ether model for light speed
Data D: Michelson–Morley ⇒ c = const
In Ωᴺ, combining N₁ and N₂ with D leads to a contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ. But by Lemma 1 this is impossible within Pᴺ. → The frame must be exited to resolve the data.
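(To make the contradiction concrete, a standard textbook illustration rather than anything from the papers: under N₁ velocities add, so a light signal travelling at c in the ether frame should be measured at c′ = c − v in a lab moving through the ether at v; with Earth's orbital speed v ≈ 30 km/s, Michelson–Morley should have seen a shift, but measured none, i.e. c′ = c. Reconciling that with a transformation rule forces t′ = t to be replaced by t′ = γ(t − vx/c²) with γ = 1/√(1 − v²/c²), i.e. exactly the new symbols claimed for Σᴿ \ Σᴺ.)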
4. FRAME JUMP OBSERVATION
Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.
5. FINALLY
A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅
B: Einstein was human
C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).
Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.
QED.
BUT: Can Humans COMPUTE those functions? (As you asked)
-> Answer: No, because frame-jumping is not a computation.
It’s a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Gödelian paradox (truth unprovable in the frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.
In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.
Whoa there boss, it's extremely tough for you to casually assume that there is a consistent or complete metascience / metaphysics / metamathematics happening in the human realm, but then model it with these impoverished machines that have no metatheoretic access.
This is really sloppy work; I'd encourage you to look deeper into how (e.g.) HOL models "theories" (roughly corresponding to your idea of a "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.
Noncomputability is available in the form of Hilbert's choice, or you can add axioms yourself to capture what notion you think is incomputable.
Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.
Of course I accept that humans are subject to the Gödelian curse, that we are often incoherent, and that we're never quite sure when we can stop collecting evidence or updating models based on observation. We are computational.
The claim isn’t that humans maintain a consistent metascience. In fact, quite the opposite. Frame jumps happen precisely because human cognition is not locked into a consistent formal system. That’s the point. It breaks, drifts, mutates. Not elegantly — generatively.

You’re pointing to HOL-in-HOL or other meta-theoretical modeling approaches. But these aren’t equivalent. You can model a frame jump after it has occurred, yes. You can define it retroactively. But that doesn’t make the generative act itself derivable from within the original system. You’re doing what every algorithmic model does: reverse-engineering emergence into a schema that assumes it.

This is not sloppiness. It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅. That is a hard constraint. Humans, somehow, do. If you don’t like the label “frame jump,” pick another. But the phenomenon is real, and you can’t dissolve it by saying “well, in HOL I can model this afterward.”

If computation always requires an external frame to extend itself, then what you’re actually conceding is that self-contained systems can’t self-jump — which is my point exactly...
> It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅
This is trivially false. For any TM with such an alphabet, you can run a program that simulates a TM with an alphabet that includes Σ′.
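Here is a minimal Python sketch of that point; the symbol table and the 3-bit code are just my illustrative choices, not anything from the papers. A program whose raw output alphabet is only {0, 1} emits strings that, under an encoding agreed on outside the machine, denote symbols such as γ that are not in that alphabet. The same trick is how a universal TM with a small alphabet simulates machines with arbitrarily large ones.

    # Sketch: a machine whose raw output alphabet is only {'0','1'} can still
    # denote symbols outside that alphabet via an agreed fixed-width encoding.
    EXTENDED_ALPHABET = ["t", "x", "v", "m", "+", "·", "c", "γ"]  # Σ′ ⊇ Σᴺ ∪ {c, γ}
    WIDTH = 3  # 2**3 = 8 codewords cover all 8 symbols

    encode = {s: format(i, f"0{WIDTH}b") for i, s in enumerate(EXTENDED_ALPHABET)}
    decode = {bits: s for s, bits in encode.items()}

    def binary_machine_emit(symbols):
        """Host machine: only ever writes '0' and '1'."""
        tape = "".join(encode[s] for s in symbols)
        assert set(tape) <= {"0", "1"}  # the raw alphabet really is binary
        return tape

    def read_out(tape):
        """An observer holding the code book recovers the extended symbols."""
        return [decode[tape[i:i + WIDTH]] for i in range(0, len(tape), WIDTH)]

    tape = binary_machine_emit(["c", "γ"])
    print(tape)            # '110111' with this particular code
    print(read_out(tape))  # ['c', 'γ']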
> Let a semantic frame be defined as Ω = (Σ, R)
But if we let an AGI operate on Ω₂ = (English, Science), that semantic frame would have encompassed both Newton and Einstein.
Your argument boils down to one specific and small semantic frame not being general enough to do all of AGI, not that _any_ semantic frame is incapable of AGI.
Your proof only applies to the Newtonian semantic frame. But your claim is that it is true for any semantic frame.
Yes, of course — if you define Ω₂ as “English + All of Science,” then congratulations, you have defined an unbounded oracle. But you’re just shifting the burden.
No system starting from Ω₁ can generate Ω₂ unless Ω₂ is already implicit. ... If you build a system trained on all of science, then yes, it knows Einstein, because you gave it Einstein. But now ask it to generate the successor of Ω₂ (call it Ω₃) with symbols that don’t yet exist. Can it derive those? No, because they’re not in Σ₂. Same limitation, new domain. This isn’t about “a small frame can’t do AGI.” It’s about every frame being finite, and therefore bounded in its generative reach. The question is whether any algorithmic system can exceed its own Σ and R. The answer is no. That’s not content-dependent, that’s structural.
None of this is relevant to what I wrote. If anything, it suggests that you don't understand the argument.
If anything, your argument is begging the question - a logical fallacy - because it rests on humans exceeding the Turing computable in order to use human abilities as evidence. But if humans do not exceed the Turing computable, then everything humans can do is evidence that something is Turing computable, and so you cannot use human abilities as evidence that something isn't Turing computable.
And so your reasoning is trivially circular.
EDIT:
To go into more specific errors, this is false:
> Let P be a deterministic Turing machine (TM) operating entirely within Ω.
>
> Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.
P can do so by simulating a TM P' whose alphabet includes σ. This is fundamental to the theory of computability, and it holds for any two sets of symbols: you can always handle the larger alphabet by simulating one machine on the other.
When your "proof" contains elementary errors like this, it's impossible to take this seriously.
You’re flipping the logic.
I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems -> symbolic containment. That’s theorem-level logic.
Then I look at real-world examples (Einstein is just one) where new symbols, concepts, and transformation rules appear that were not derivable within the predecessor frame. You can claim, philosophically (!), that “well, humans must be computable, so Einstein’s leap must be too.” Fine. But now you’re asserting that the uncomputable must be computable because humans did it. That’s your circularity, not mine.

I don’t claim humans are “super-Turing.” I claim that frame-jumping is not computation. You can still be physical, messy, and bounded... and generate outside your rational model. That’s all the proof needs.
No, I'm not flipping the logic.
> I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems -> symbolic containment. That’s theorem-level logic.
Any such "proof" is irrelevant unless you can prove that humans can exceed the Turing computable. If humans can't exceed the Turing computable, then any "proof" that shows limits for algorithmic systems that somehow don't apply to humans must inherently be incorrect.
And so you're sidestepping the issue.
> But now you’re asserting that the uncomputable must be computable because humans did it.
No, you're here demonstrating you failed to understand the argument.
I'm asserting that you cannot use the fact that humans can do something as proof that humans exceed the Turing computable, because if humans do not exceed the Turing computable said "proof" would still give the same result. As such it does not prove anything.
And proving that humans exceed the Turing computable is a necessary precondition for proving AGI impossible.
> I don’t claim humans are “super-Turing.”
Then your claim to prove AGI can't exist is trivially false. For it to be true, you would need to make that claim, and prove it.
That you don't seem to understand this tells me you don't understand the subject.
(See also my edit above; your proof also contains elementary failures to understand Turing machines.)
You’re misreading what I’m doing, and I suspect you’re also misdefining what a “proof” in this space needs to be.
I’m not assuming humans exceed the Turing computable. I’m not using human behavior as a proof of AGI’s impossibility. I’m doing something much more modest - and much more rigorous.
Here’s the actual chain:
1. There’s a formal boundary for algorithmic systems. It’s called symbolic containment. A system defined by a finite symbol set Σ and rule set R cannot generate a successor frame (Σ′, R′) where Σ′ introduces novel symbols not contained in Σ. This is not philosophy — this is structural containment, and it is provable.
2. Then I observe: in human intellectual history, we find recurring examples of frame expansion. Not optimization, not interpolation — expansion. New primitives. New rules. Special relativity didn’t emerge from Newton through deduction. It required symbols and structures that couldn’t be formed inside the original frame.
3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.
4. This leads to a conclusion: if AGI is an algorithmic system (finite symbols, finite rules, formal inference), then it will not be capable of frame jumps. And it is not incapable of that because it lacks compute; the system is structurally bounded by what it is.
So your complaint that I “haven’t proven humans exceed Turing” is misplaced. I didn’t claim to. You’re asking me to prove something that I simply don’t need to assert.
I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed). Therefore, if humans are purely algorithmic, something’s missing in our understanding of how those systems operate. And if AGI remains within the current algorithmic paradigm, it will not do X. That’s what I’ve shown.
You can still believe humans are Turing machines; fine by me. But if this belief is to be more than some kind of religious statement, then it is you who would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅. It is you who would need to show how uncomputable concepts emerge from computable substrates without violating containment (-> and that means: without violating its own logic, since in formal systems logic and containment end up as the same thing: your symbol set defines your expressive space; step outside that, and you’re no longer reasoning — you’re redefining the space, the universe you’re reasoning in).
Otherwise, the limitation stands — and the claim that “AGI can do anything humans do” remains an ungrounded leap of faith.
Also: if you believe the only valid proof of AGI impossibility must rest on metaphysical redefinition of humanity as “super-Turing,” then you’ve set an artificial constraint that ensures no such proof could ever exist, no matter the logic.
That’s trading epistemic rigor for intellectual insulation.
As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.
There's nothing rigorous about this. It's pure crackpottery.
As long as you claim to disprove AGI, it inherently follows that you need to prove that humans exceed the Turing computable to succeed. Since you specifically state that you are not trying to prove that humans exceed the Turing computable, you're demonstrating a fundamental lack of understanding of the problem.
> 3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.
This is only true if humans exceed the Turing computable, as otherwise humans are proof that this is something an algorithmic system can do. So despite claiming that you're not trying to prove that humans exceed the Turing computable, you are making the claim that humans can.
> I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).
This is a direct statement that you claim that humans are observed to exceed the Turing computable.
> then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅
This is fundamental to Turing equivalence. If there exists any Turing machine that can generate Σ′, then any Turing machine can generate Σ′.
Anything that is possible with any Turing machine is, in fact, possible with a machine with as few as 2 symbols (the smallest known universal Turing machine, the (2,3) machine, has 2 states and 3 symbols, but per Shannon you can always trade states for symbols, so a (3,2) machine is also possible). This is because you can always simulate an environment where a larger alphabet is encoded using multiple symbols of the smaller one.
> As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.
This is exactly the part that fails.
Any TM can simulate any other, and by extension any TM can be extended to handle any alphabet through simulation.
If you don't understand this, then you don't understand the very basics of Turing Machines.
“Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?"”
Is that not the other way around? “…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”
Staying at high speed is symmetric! You'd both appear to age slower from the other's POV. It's only if one brother turns around and comes back, therefore accelerating, that you get an asymmetry.
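(For concreteness, with the standard formula: the time-dilation factor is γ = 1/√(1 − v²/c²), so at v ≈ 0.866c you get γ ≈ 2; over a full round trip the travelling brother's clock accumulates only half as much time as the stay-at-home brother's, and it is the turnaround acceleration that breaks the symmetry between the two.)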
Indeed. One of my other thoughts here on the Relativity example was "That sets the bar high given most humans can't figure out special relativity even with all the explainers for Einstein's work".
But I'm so used to AGI being conflated with ASI that it didn't seem worth pointing out compared to the more fundamental errors.
Given rcxdude’s reply, it appears I am one of those humans who can’t figure out special relativity (let alone general).
Wrt ‘AGI/ASI’: while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.