> Basically because physical laws obviously allow more than algorithmic cognition and problem solving.

This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allow more than algorithmic cognition".

Your claim here also goes against the physical interpretation of the Church-Turing thesis.

Without rigorously addressing this, there is no point taking your papers seriously.

No problem, here is your proof, although it is a bit long:

1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where

Σ is a finite symbol set and R is a finite set of inference rules.

Let Ω′ = (Σ′, R′) be a candidate successor frame.

Define a frame jump by the Frame Jump Condition: Ω′ extends Ω if Σ′ \ Σ ≠ ∅ or R′ \ R ≠ ∅.

Let P be a deterministic Turing machine (TM) operating entirely within Ω.

Then: Lemma 1 (Symbol Containment): L(P) ⊆ Σ*, i.e., P cannot emit any σ ∉ Σ.

(Where Σ* denotes the set of all finite strings over Σ; derivable outputs are strings over Σ formed under the inference rules R.)

Proof Sketch: P’s tape alphabet is fixed in advance and contains only Σ (plus the blank and any auxiliary markers). By induction on computation steps, no step can write a symbol outside this fixed alphabet, so no σ ∉ Σ ever appears in P’s output. ∎
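To make the containment claim concrete, here is a minimal sketch in Python, using a simplified string-rewriting stand-in for the TM; the `Frame`, `derive`, and `is_frame_jump` names are made up for illustration and are not part of the proof.

```python
# Toy model of a frame Ω = (Σ, R): a finite symbol set plus rewrite rules.
# A simplification of a TM, but it exhibits the same containment property.

class Frame:
    def __init__(self, symbols, rules):
        self.symbols = frozenset(symbols)   # Σ: finite symbol set
        self.rules = dict(rules)            # R: pattern -> replacement rewrites

    def derive(self, start, steps=5):
        """Apply the rewrite rules for a bounded number of steps."""
        outputs = {start}
        for _ in range(steps):
            outputs |= {s.replace(lhs, rhs)
                        for s in outputs
                        for lhs, rhs in self.rules.items()}
        return outputs


def is_frame_jump(old, new):
    """Frame Jump Condition: Σ′ \\ Σ ≠ ∅ or R′ \\ R ≠ ∅."""
    return bool(new.symbols - old.symbols) or bool(
        set(new.rules.items()) - set(old.rules.items()))


# Toy "Newtonian" frame: single-character symbols, one made-up rewrite rule.
newton = Frame("txyzvFm+·", {"F": "m·v"})
for s in newton.derive("F"):
    assert set(s) <= newton.symbols         # Lemma 1 in miniature: no σ ∉ Σ
```

A successor frame with extra symbols or rules makes `is_frame_jump` return True; that is the condition applied in step 2 below.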

2. APPLICATION: Newton → Special Relativity

Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian frame)
Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR frame)

Let φ = “The speed of light is invariant in all inertial frames.”
Let Tᴿ be the theory of special relativity.
Let Pᴺ be a TM constrained to Σᴺ.

By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.

But φ ∈ Tᴿ requires symbols from Σᴿ \ Σᴺ.

→ Therefore Pᴺ ⊬ φ, and hence Tᴿ ⊈ L(Pᴺ).

Thus:

Special Relativity cannot be derived from Newtonian physics within its original formal frame.
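As a toy check of this step, in the same illustrative style as the sketch above (symbols reduced to single characters, with η standing in for η(·,·)):

```python
# Σᴿ strictly extends Σᴺ, so the Frame Jump Condition of step 1 fires.
sigma_n = {"t", "x", "y", "z", "v", "F", "m", "+", "·"}
sigma_r = sigma_n | {"c", "γ", "η"}          # η stands in for η(·,·)
assert sigma_r - sigma_n == {"c", "γ", "η"}  # Σᴿ \ Σᴺ ≠ ∅
```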

3. EMPIRICAL CONFLICT

Let:
Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)
Axiom N₂: Ether model for light speed
Data D: Michelson–Morley ⇒ c = const

In Ωᴺ, combining N₁ and N₂ with D leads to a contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., symbols from Σᴿ \ Σᴺ. But by Lemma 1, that is impossible within Pᴺ. -> The frame must be exited to resolve the data.
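Spelled out as a compressed derivation (a simplification: the real Michelson–Morley analysis compares round-trip light travel times along and across the presumed ether wind, but the conflict is of the same kind):

```latex
\begin{align*}
  \text{N}_1:\; x' = x - vt,\ t' = t
    \;&\Rightarrow\; u' = \tfrac{dx'}{dt'} = u - v \\
  \text{N}_2:\; u = c \text{ (light speed in the ether frame)}
    \;&\Rightarrow\; u' = c - v \neq c \quad \text{for } v \neq 0 \\
  D:\; u' = c \text{ in every inertial frame}
    \;&\Rightarrow\; \text{N}_1 \wedge \text{N}_2 \wedge D \text{ is inconsistent in } \Omega^{N}
\end{align*}
```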

4. FRAME JUMP OBSERVATION

Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.

5. FINALLY

A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅

B: Einstein was human

C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).

Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.

QED.

BUT: Can Humans COMPUTE those functions? (As you asked)

-> Answer: No, because frame-jumping is not a computation.

It’s a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Gödelian paradox (a truth unprovable in the frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.

In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.

Whoa there boss, extremely tough for you to casually assume that there is a consistent or complete metascience / metaphysics / metamathematics happening in the human realm, but then model it with these impoverished machines that have no metatheoretic access.

This is really sloppy work; I'd encourage you to look deeper into how (e.g.) HOL models "theories" (roughly corresponding to your idea of a "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.

Noncomputability is available in the form of Hilbert's choice, or you can add axioms yourself to capture what notion you think is incomputable.

Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.

Of course I accept that humans are subject to the Gödelian curse, and we are often incoherent, and we're never quite sure when we can stop collecting evidence or updating models based on observation. We are computational.

The claim isn’t that humans maintain a consistent metascience. In fact, quite the opposite. Frame jumps happen precisely because human cognition is not locked into a consistent formal system. That’s the point. It breaks, drifts, mutates. Not elegantly — generatively.

You’re pointing to HOL-in-HOL or other meta-theoretical modeling approaches. But these aren’t equivalent. You can model a frame-jump after it has occurred, yes. You can define it retroactively. But that doesn’t make the generative act itself derivable from within the original system. You’re doing what every algorithmic model does: reverse-engineering emergence into a schema that assumes it.

This is not sloppiness. It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅. That is a hard constraint. Humans, somehow, do. If you don’t like the label “frame jump,” pick another. But that phenomenon is real, and you can’t dissolve it by saying “well, in HOL I can model this afterward.”

If computation always requires an external frame to extend itself, then what you’re actually conceding is that self-contained systems can’t self-jump — which is my point exactly...

> It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅

This is trivially false. For any TM with such an alphabet, you can run a program that simulates a TM with an alphabet that includes Σ′.
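As a rough sketch of the encoding trick being referred to here (illustrative only; the symbol list and helper names are made up):

```python
# A machine whose tape alphabet is just {0, 1} can still "emit" any symbol of a
# larger alphabet Σ′ by writing an agreed-upon fixed-width binary code for it.

SIGMA_PRIME = ["t", "x", "v", "c", "γ", "η"]   # includes symbols outside Σᴺ

def encode(symbol):
    """Binary code word for a symbol of the larger alphabet."""
    return format(SIGMA_PRIME.index(symbol), "03b")

def decode(bits):
    return SIGMA_PRIME[int(bits, 2)]

tape = encode("γ")                  # the small-alphabet machine writes '100'
assert set(tape) <= {"0", "1"}      # it only ever writes 0s and 1s...
assert decode(tape) == "γ"          # ...yet under the encoding it has emitted γ
```

The same construction underlies the universal TM: the simulating machine's alphabet never changes, but what it represents is not limited to its own symbols.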

> Let a semantic frame be defined as Ω = (Σ, R)

But if we let an AGI operate on Ω2 = (English, Science), that semantic frame would have encompassed both Newton and Einstein.

Your argument boils down into one specific and small semantic frame not being general enough to do all of AGI, not that _any_ semantic frame is incapable of AGI.

Your proof only applies to the Newtonian semantic frame. But your claim is that it is true for any semantic frame.

Yes, of course — if you define Ω² as “English + All of Science,” then congratulations, you have defined an unbounded oracle. But you’re just shifting the burden.

No system starting from Ω₁ can generate Ω₂ unless Ω₂ is already implicit. ... If you build a system trained on all of science, then yes, it knows Einstein because you gave it Einstein. But now ask it to generate the successor of Ω² (call it Ω³) with symbols that don’t yet exist. Can it derive those? No, because they’re not in Σ². Same limitation, new domain.

This isn’t about “a small frame can’t do AGI.” It’s about every frame being finite, and therefore bounded in its generative reach. The question is whether any algorithmic system can exceed its own Σ and R. The answer is no. That’s not content-dependent, that’s structural.

None of this is relevant to what I wrote. If anything, it suggests that you don't understand the argument.

If anything, your argument is begging the question (a logical fallacy), because it rests on the assumption that humans exceed the Turing computable in order to use human abilities as evidence. But if humans do not exceed the Turing computable, then everything humans can do is evidence that something is Turing computable, and so you cannot use human abilities as evidence that something isn't Turing computable.

And so your reasoning is trivially circular.

EDIT:

To go into more specific errors, this is false:

> Let P be a deterministic Turing machine (TM) operating entirely within Ω.

>

> Then: Lemma 1 (Symbol Containment): L(P) ⊆ Σ*, i.e., P cannot emit any σ ∉ Σ.

P can do so by simulating a TM P' whose alphabet includes σ. This is fundamental to the theory of computability, and holds for any two sets of symbols: You can always handle the larger alphabet by simulating one machine on the other.

When your "proof" contains elementary errors like this, it's impossible to take this seriously.

You’re flipping the logic.

I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems -> symbolic containment. That’s theorem-level logic.

Then I look at real-world examples (Einstein is just one) where new symbols, concepts, and transformation rules appear that were not derivable within the predecessor frame. You can claim, philosophically (!), that “well, humans must be computable, so Einstein’s leap must be too.” Fine. But now you’re asserting that the uncomputable must be computable because humans did it. That’s your circularity, not mine.

I don’t claim humans are “super-Turing.” I claim that frame-jumping is not computation. You can still be physical, messy, and bounded ... and generate outside your rational model. That’s all the proof needs.

No, I'm not flipping the logic.

> I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems -> symbolic containment. That’s theorem-level logic.

Any such "proof" is irrelevant unless you can prove that humans can exceed the Turing computable. If humans can't exceed the Turing computable, then any "proof" that shows limits for algorithmic systems that somehow don't apply to humans must inherently be incorrect.

And so you're sidestepping the issue.

> But now you’re asserting that the uncomputable must be computable because humans did it.

No, you're here demonstrating you failed to understand the argument.

I'm asserting that you cannot use the fact that humans can do something as proof that humans exceed the Turing computable, because if humans do not exceed the Turing computable, said "proof" would still give the same result. As such, it does not prove anything.

And proving that humans exceed the Turing computable is a necessary precondition for proving AGI impossible.

> I don’t claim humans are “super-Turing.”

Then your claim to prove AGI can't exist is trivially false. For it to be true, you would need to make that claim, and prove it.

That you don't seem to understand this tells me you don't understand the subject.

(See also my edit above; your proof also contains elementary failures to understand Turing machines)