Well, it in fact depends on what intelligence is to your understanding:
- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through or were simply lucky to have found relativity theory and other innovations just at the convenient moment in time ... So then, AI will soon also stumble over all kinds of innovations. Neither of the two will be able to deliberately think beyond what is thinkable at the respective present.
- But if intelligence is not only a level of pure rational cognition, but maybe an ability to somehow overcome these frame-limits, then humans obviously exert some sort of abilities that are beyond rational inference. Abilities that algorithms cannot possibly reach, as all they can do is compute.
- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.
The main point is: neither algorithms nor rationality can point beyond themselves.
In other words: You cannot think out of the box - thinking IS the box.
(maybe have a quick look at my first proof - last chapter before the conclusion - you will find a historical timeline on that IQ thing)
Let me steal another user's alternate phrasing: Since humans and computers are both bound by the same physical laws, why does your proof not apply to humans?
Why?
1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving.
(And also: I am bound by thermodynamics just as my mother-in-law is, still I get disarranged by her mere presence, while I always have to put laxatives in her wine to counter that)
2. Human rationality is just as limited as algorithms. Neither an algorithm nor human logic can find a path from Newton to Einstein's SR, because no such path exists.
3. Physical laws - where do they really come from?
From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable?
I honestly don’t know.
In a nutshell: there obviously is no law that forbids us to innovate - we do this quite often. There is only a logical boundary, which says that there is no way to derive something from a system that does not already contain it - no way for thinking to point beyond what is thinkable.
Imagine little Albert asking his physics teacher in 1880:
"Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... i guess "interesting thought" would not have been the probable answer... rather something like "have you been drinking? Stop doing that mental crap - go away, you little moron!"
> Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving.
You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But, humoring this idea, then if physical laws allow it, why can this "more than algorithmic" cognition not be done artificially? As you say - we can obviously do it. What magical line is preventing an artificial system from doing the same?
The lack of any mention of the Church-Turing thesis in both papers suggests he hasn't even considered that angle.
But it is the fundamental objection he would need to overcome.
There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.
> Basically because physical laws obviously allow more than algorithmic cognition and problem solving.
This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allows more than algorithmic cognition".
Your claim here also goes against the physical interpretation of the Church-Turing thesis.
Without rigorously addressing this, there is no point taking your papers seriously.
No problem, here is your proof - although a bit long:
1. THEOREM:
Let a semantic frame be defined as
Ω = (Σ, R), where
Σ is a finite symbol set and
R is a finite set of inference rules.
Let Ω′ = (Σ′, R′) be a candidate successor frame.
Define a frame jump as:
Frame Jump Condition: Ω′ extends Ω if
Σ′\Σ ≠ ∅ or
R′\R ≠ ∅
Let P be a deterministic Turing machine (TM) operating entirely within Ω.
Then:
Lemma 1 (Symbol Containment):
The output language satisfies L(P) ⊆ Σ*; P cannot emit any σ ∉ Σ.
(Here Σ* is the set of all finite symbol strings over Σ; derivable
outputs are formed from Σ under the inference rules R.)
Proof Sketch:
P’s tape alphabet is fixed to Σ and symbols derived from Σ.
By induction, no computation step can introduce a symbol not already in Σ.
∎
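To illustrate the containment claim, here is a minimal Python sketch of my own (a toy rewriting system standing in for Ω = (Σ, R), not the author's formal construction): every string derivable by applying rules over Σ is itself a string over Σ.

    # Toy frame Omega = (Sigma, R): finite symbol set plus rewrite rules whose
    # right-hand sides only use symbols from Sigma. Illustrative encoding only.
    Sigma = {"a", "b"}
    R = {"a": "ab", "b": "ba"}          # rule: replace one occurrence of lhs by rhs

    def derive(axiom: str, steps: int) -> set[str]:
        """Everything reachable from the axiom in at most `steps` rule applications."""
        reached = {axiom}
        frontier = {axiom}
        for _ in range(steps):
            frontier = {s.replace(lhs, rhs, 1) for s in frontier for lhs, rhs in R.items()}
            reached |= frontier
        return reached

    outputs = derive("ab", steps=4)
    # Lemma 1 in miniature: no derived string contains a symbol outside Sigma.
    assert all(set(s) <= Sigma for s in outputs)
    print(len(outputs), "strings derived, all over", Sigma)

The sketch only shows the containment direction; whether it faithfully models everything an algorithmic system is allowed to do is exactly what the replies below dispute.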
2. APPLICATION: Newton → Special Relativity
Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian Frame)
Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR Frame)
Let φ = “The speed of light is invariant in all inertial frames.”
Let Tᴿ be the theory of special relativity.
Let Pᴺ be a TM constrained to Σᴺ.
By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.
But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ
→ Therefore Pᴺ ⊬ φ
→ Tᴿ ⊈ L(Pᴺ)
Thus:
Special Relativity cannot be derived from Newtonian physics within its original formal frame.
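The frame-jump condition used here is just a set difference; a trivial sketch with placeholder names standing in for the symbols listed above:

    # Frame Jump Condition: Omega' extends Omega iff it adds new symbols or rules.
    Sigma_N = {"t", "x", "y", "z", "v", "F", "m", "+", "*"}   # Newtonian frame (placeholders)
    Sigma_R = Sigma_N | {"c", "gamma", "eta"}                 # SR frame adds c, γ, η(·,·)

    new_symbols = Sigma_R - Sigma_N
    print("frame jump:", bool(new_symbols), "new symbols:", new_symbols)
    # A machine whose outputs are strings over Sigma_N never prints c, gamma or eta
    # unless its alphabet is changed or, as argued later in the thread, encoded.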
3. EMPIRICAL CONFLICT
Let:
Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)
Axiom N₂: Ether model for light speed
Data D: Michelson–Morley ⇒ c = const
In Ωᴺ, combining N₁ and N₂ with D leads to contradiction.
Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ
But by Lemma 1: impossible within Pᴺ.
-> Frame must be exited to resolve data.
4. FRAME JUMP OBSERVATION
Einstein introduced Σᴿ — a new frame with new symbols and transformation rules.
He did so without derivation from within Ωᴺ.
That constitutes a frame jump.
5. FINALLY
A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅
B: Einstein was human
C: Therefore, humans can initiate frame jumps
(i.e., generate formal systems containing symbols/rules not computable
within the original system).
Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps.
But human cognition demonstrably can.
QED.
BUT:
Can Humans COMPUTE those functions? (As you asked)
-> Answer: No - because frame-jumping is not a computation.
It’s a generative act that lies outside the scope of computational derivation.
Any attempt to perform frame-jumping by computation would either a) enter a Gödelian paradox (truth unprovable in the frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.
In each case, the cognitive system fails not from error, but from structural constraint.
AND: The same constraint exists for human rationality.
Whoa there boss, extremely tough for you to casually assume that there is a consistent or complete metascience / metaphysics / metamathematics happening in human realm, but then model it with these impoverished machines that have no metatheoretic access.
This is really sloppy work, I'd encourage you to look deeper into how (eg) HOL models "theories" (roughly corresponding to your idea of "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.
Noncomputability is available in the form of Hilbert's choice, or you can add axioms yourself to capture what notion you think is incomputable.
Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.
Of course I accept that humans are subject to the Goedelian curse, and we are often incoherent, and we're never quite sure when we can stop collecting evidence or updating models based on observation. We are computational.
The claim isn’t that humans maintain a consistent metascience. In fact, quite the opposite. Frame jumps happen precisely because human cognition is not locked into a consistent formal system. That’s the point. It breaks, drifts, mutates. Not elegantly — generatively.
You’re pointing to HOL-in-HOL or other meta-theoretical modeling approaches. But these aren’t equivalent. You can model a frame-jump after it has occurred, yes. You can define it retroactively. But that doesn’t make the generative act itself derivable from within the original system. You’re doing what every algorithmic model does: reverse-engineering emergence into a schema that assumes it.
This is not sloppiness. It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅. That is a hard constraint. Humans, somehow, do. If you don’t like the label “frame jump,” pick another. But that phenomenon is real, and you can’t dissolve it by saying “well, in HOL I can model this afterward.”
If computation is always required to have an external frame to extend itself, then what you’re actually conceding is that self-contained systems can’t self-jump — which is my point exactly...
But if we let an AGI operate on Ω₂ = (English, Science), that semantic frame would have encompassed both Newton and Einstein.
Your argument boils down into one specific and small semantic frame not being general enough to do all of AGI, not that _any_ semantic frame is incapable of AGI.
Your proof only applies to the Newtonian semantic frame. But your claim is that it is true for any semantic frame.
Yes, of course — if you define Ω₂ as “English + all of science,” then congratulations, you have defined an unbounded oracle. But you’re just shifting the burden.
No system starting from Ω₁ can generate Ω₂ unless Ω₂ is already implicit. ... If you build a system trained on all of science, then yes, it knows Einstein because you gave it Einstein.
But now ask it to generate the successor of Ω₂ (call it Ω₃) with symbols that don’t yet exist. Can it derive those? No, because they’re not in Σ₂. Same limitation, new domain.
This isn’t about “a small frame can’t do AGI.” It’s about every frame being finite, and therefore bounded in its generative reach. The question is whether any algorithmic system can exceed its own Σ and R. The answer is no. That’s not content-dependent, that’s structural.
None of this is relevant to what I wrote. If anything, it suggests that you don't understand the argument.
If anything, your argument is begging the question - a logical fallacy - because it relies on humans exceeding the Turing computable in order to use human abilities as evidence. But if humans do not exceed the Turing computable, then everything humans can do is evidence that something is Turing computable, and so you cannot use human abilities as evidence that something isn't Turing computable.
And so your reasoning is trivially circular.
EDIT:
To go into more specific errors, this is false:
> Let P be a deterministic Turing machine (TM) operating entirely within Ω.
>
> Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.
P can do so by simulating a TM P' whose alphabet includes σ.
This is fundamental to the theory of computability, and holds for any two sets of symbols: You can always handle the larger alphabet by simulating one machine on the other.
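To make the simulation point concrete, here is a minimal Python sketch (the two-bit code is my own toy choice, not a full universal machine): the host machine's output only ever contains 0s and 1s, yet by simulating P′ through an agreed encoding it effectively emits γ, a symbol outside its own alphabet.

    # The simulation argument in miniature: the host works over {0, 1} but simulates
    # a machine P' whose alphabet also contains the "new" symbol γ, two bits per symbol.
    CODE = {"t": "00", "x": "01", "c": "10", "γ": "11"}   # P'-alphabet -> bit codes
    DECODE = {bits: sym for sym, bits in CODE.items()}

    def p_prime_output() -> list[str]:
        """The simulated machine P': a string over its larger alphabet."""
        return ["c", "γ", "t"]

    def host_output() -> str:
        """The host machine: everything it writes is a 0 or a 1."""
        return "".join(CODE[sym] for sym in p_prime_output())

    tape = host_output()
    assert set(tape) <= {"0", "1"}                        # the host never leaves {0, 1}
    decoded = [DECODE[tape[i:i + 2]] for i in range(0, len(tape), 2)]
    print(tape, "->", decoded)                            # 101100 -> ['c', 'γ', 't']

Whether the decoded γ counts as "emitted" by the host is precisely where the two sides of this exchange part ways.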
When your "proof" contains elementary errors like this, it's impossible to take this seriously.
I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.
Then I look at real-world examples (Einstein is just one) where new symbols, concepts, and transformation rules appear that were not derivable within the predecessor frame.
You can claim, philosophically (!), that “well, humans must be computable, so Einstein’s leap must be too.” Fine. But now you’re asserting that the uncomputable must be computable because humans did it. That’s your circularity, not mine.
I don’t claim humans are “super-Turing.” I claim that frame-jumping is not computation. You can still be physical, messy, and bounded .. and generate outside your rational model. That’s all the proof needs.
> I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.
Any such "proof" is irrelevant unless you can prove that humans can exceed the Turing computable. If humans can't exceed the Turing computable, then any "proof" that shows limits for algoritmic systems that somehow don't apply to humans must inherently be incorrect.
And so you're sidestepping the issue.
> But now you’re asserting that the uncomputable must be computable because humans did it.
No, you're here demonstrating you failed to understand the argument.
I'm asserting that you cannot use the fact that humans can do something as proof that humans exceed the Turing computable, because if humans do not exceed the Turing computable said "proof" would still give the same result. As such it does not prove anything.
And proving that humans exceed the Turing computable is a necessary precondition for proving AGI impossible.
> I don’t claim humans are “super-Turing.”
Then your claim to prove AGI can't exist is trivially false. For it to be true, you would need to make that claim, and prove it.
That you don't seem to understand this tells me you don't understand the subject.
(See also my edit above; your proof also contains elementary failures to understand Turing machines)
You’re misreading what I’m doing, and I suspect you’re also misdefining what a “proof” in this space needs to be.
I’m not assuming humans exceed the Turing computable. I’m not using human behavior as a proof of AGI’s impossibility. I’m doing something much more modest - and much more rigorous.
Here’s the actual chain:
1. There’s a formal boundary for algorithmic systems. It’s called symbolic containment. A system defined by a finite symbol set Σ and rule set R cannot generate a successor frame (Σ′, R′) where Σ′ introduces novel symbols not contained in Σ. This is not philosophy — this is structural containment, and it is provable.
2. Then I observe: in human intellectual history, we find recurring examples of frame expansion. Not optimization, not interpolation — expansion. New primitives. New rules. Special relativity didn’t emerge from Newton through deduction. It required symbols and structures that couldn’t be formed inside the original frame.
3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.
4. This leads to a conclusion: if AGI is an algorithmic system (finite symbols, finite rules, formal inference), then it will not be capable of frame jumps. And it is not incapable of that because it lacks compute. The system is structurally bounded by what it is.
So your complaint that I “haven’t proven humans exceed Turing” is misplaced. I didn’t claim to. You’re asking me to prove something that I simply don’t need to assert.
I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).
Therefore, if humans are purely algorithmic, something’s missing in our understanding of how those systems operate.
And if AGI remains within the current algorithmic paradigm, it will not do X. That’s what I’ve shown.
You can still believe humans are Turing machines, fine for me.
But if this belief is to be more than some kind of religious statement, then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅.
It is you that would need to show how uncomputable concepts emerge from computable substrates without violating containment
(-> and that means: without violating its own logic - as in formal systems, logic and containment end up as the same thing: your symbol set defines your expressive space; step outside that, and you’re no longer reasoning — you’re redefining the space, the universe you’re reasoning in).
Otherwise, the limitation stands — and the claim that “AGI can do anything humans do” remains an ungrounded leap of faith.
Also: if you believe the only valid proof of AGI impossibility must rest on a metaphysical redefinition of humanity as “super-Turing,” then you’ve set an artificial constraint that ensures no such proof could ever exist, no matter the logic.
Intellectually, that’s trading epistemic rigor for insulation.
As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails.
The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.
There's nothing rigorous about this. It's pure crackpottery.
As long as you claim to disprove AGI, it inherently follows that you need to prove that humans exceed the Turing computable to succeed. Since you specifically state that you are not trying to prove that humans exceed the Turing computable, you're demonstrating a fundamental lack of understanding of the problem.
> 3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.
This is only true if humans exceed the Turing computable, as otherwise humans are proof that this is something that an algorithmic system can do. So despite claiming that you're not trying to prove that humans exceed the Turing computable, you are making the claim that humans can.
> I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).
This is a direct statement that you claim that humans are observed to exceed the Turing computable.
> then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅
This is fundamental to Turing equivalence. If there exists any Turing machine that can generate Σ′, then any Turing machine can generate Σ′.
Anything that is possible with any Turing machine is, in fact, possible with a machine with as few as 2 symbols (the smallest candidate universal Turing machine, the (2,3) machine, has 2 states and 3 symbols, but per Shannon you can always trade states for symbols, so a universal machine over just 2 symbols is also possible). This is because you can always simulate an environment where a larger alphabet is encoded with multiple symbols.
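Generalizing the two-bit toy code above, any finite alphabet can be handled over {0, 1} with fixed-width codes of ceil(log2 k) bits; a short sketch of my own (the state bookkeeping a real simulation needs is omitted):

    import math

    def binary_codes(alphabet: list[str]) -> dict[str, str]:
        """Assign each symbol of a k-symbol alphabet a fixed-width binary code."""
        width = max(1, math.ceil(math.log2(len(alphabet))))
        return {sym: format(i, f"0{width}b") for i, sym in enumerate(alphabet)}

    codes = binary_codes(["t", "x", "y", "z", "c", "γ", "η"])   # 7 symbols -> 3 bits each
    print(codes)
    print("".join(codes[s] for s in ["c", "γ", "t"]))           # a string over {0, 1} only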
> As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.
This is exactly the part that fails.
Any TM can simulate any other, and that by extension, any TM can be extended to any alphabet through simulation.
If you don't understand this, then you don't understand the very basics of Turing Machines.
“Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?"”
Is that not the other way around?
“…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”
Staying at high speed is symmetric! You'd both appear to age slower from the other's POV. It's only if one brother turns around and comes back, therefore accelerating, that you get an asymmetry.
Indeed. One of my other thoughts here on the Relativity example was "That sets the bar high given most humans can't figure out special relativity even with all the explainers for Einstein's work".
But I'm so used to AGI being conflated with ASI that it didn't seem worth it compared to the more fundamental errors.
Given rcxdude’s reply it appears I am one of those humans who can’t figure out special relativity (let alone general)
Wrt ‘AGI/ASI’, while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.
Humans are fallible in a way computers are not. One could argue any creative process is an exercise in fallibility.
More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer make such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.
Hang on, hasn't everyone spent the past few years complaining about LLMs and diffusion models being very fallible?
And we can get LLMs to do better by just prompting them to "think step by step" or replacing the first ten attempts to output a "stop" symbolic token with the token for "Wait… "?
This paper is about the limits in current systems.
Ai currently has issues with seeing what's missing. Seeing the negative space.
When dealing with a complex codebase you are newly exposed to, you tackle an issue from multiple angles. You look at things from data structures and code execution paths; basically, humans clearly have some pressure to go "fuck, I think I lost the plot", and then approach it from another paradigm, try to narrow scope, or, based on the increased information, isolate the core place edits need to be made to achieve something.
Basically the ability to say, "this has stopped making sense" and stop or change approach.
Also, we clearly do path exploration and semantic compression in our sleep.
We also have the ability to transliterate data between semantic and visual structures, time series, light algorithms (but not exponential algorithms, we have a known blindspot there).
Humans are better at seeing what's missing, better at not closuring, better at reducing scope using many different approaches and because we operate in linear time and there are a lot of very different agents we collectively nibble away at complex problems over time.
I mean, on a 1:1 telomere basis, due to structural differences people can be as low as 93% similar genetically.
We also have different brain structures, I assume they don't all function on a single algorithmic substrate, visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts of our brain that handle illogic better. We can introspect on our own semantic saturation, we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, we can dive on that part and then zoom back out.
There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing and even then the message type used seems flexible enough that you can shove word data into a visual processor part and see what falls out, and this happens without us thinking about it explicitly.
I guess so too... but whatever it is: it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there, that cannot be transcended by tech, compute, training, data etc.
Explain what you mean by "algorithm" and "algorithmic". Be very precise. You are using this vague word to hinge your entire argument on, and it is necessary you explain first what it means. From reading your replies here it is clear you are laboring under a definition of "algorithm" quite different from the accepted one.
Why can't it be algorithmic?
If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does do the same process to do things like consolidating information, processing the "world model" and so on.
Some processes are undoubtedly learned from experience but considering people seem to think many of the same things and are similar in many ways it remains to be seen whether the most important parts are learned rather than innate from birth.
Why do you think humans are capable of doing anything that isn't algorithmic?
This statement, and your lack of mention of the Church-Turing thesis in your papers, suggests you're using a non-standard definition of "algorithmic", and your argument rests on it.
Humans use soul juice to connect to the understandome. Machines can't connect to the understandome because of Gödel's incompleteness; they can only make relationships between tokens. Not map them to reality like we can via magic.
First of all, math isn’t real any more than language is. It’s an entirely human construct, so it’s possible you cannot reach AGI using mathematical means, as math might not be able to fully express it. It’s similar to how language cannot fully describe what a color is, only vague approximations and measurements. If you wanted to create the color green, you cannot do it by describing various properties; you must create the actual green somehow.
I don't think it would be unfair to accept the brain state of green as an accurate representation of green for all intents and purposes.
Similar to how "computer code" and "video game world" are the same thing. Everything in the video game world is perfectly encoded in the programming. There is nothing transcendent happening, it's two different views of the same core object.
Fair enough. But then, AGI wouldn't really be based on math, but on physics. Why would an artificially-constructed physical system have (fundamentally) different capabilities than a natural one?
My take is that it transcends any science that we'll understand and harness in the lifetime of anyone living today. It for all intents and purposes transcends science from our point of view, but not necessarily in principle.
Noted /s, but truly this is why I think even current models are already more disruptive than naysayers are willing to accept that any future model ever could be.
I'm noting the high frequency of think pieces from said naysayers. It's every day now: they're all furiously writing about flaws and limitations and extrapolating these to unjustifiable conclusions, predicting massive investment failures (inevitable, and irrelevant), arguing AGI is impossible with no falsifiable evidence, etc.
Humans do a lot of things that computers don't, such as be born, age (verb), die, get hungry, fall in love, reproduce, and more. Computers can only metaphorically do these things, human learning is correlated with all of them, and we don't confidently know how. Have some humility.
TFA presents an information-theoretic argument for AGI being impossible. My reading of your parent commenter is that they are asking why this argument does not also apply to humans.
You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).
We don’t even know how LLMs work. But we do know the underlying mechanisms are governed by math because we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.
So because of this we know reality is governed by maths. We just can’t fully model the high level consequence of emergent patterns due to the sheer complexity of trillions of interacting atoms.
So it’s not that there’s some mysterious supernatural thing we don’t understand. It’s purely a complexity problem in that we only don’t understand it because it’s too complex.
> we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.
> So because of this we know reality is governed by maths.
That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.
Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.
> What does humility have to do with anything?
Not the GP but I think humility is kinda relevant here.
>That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.
Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have based all of our technology and culture and science around it. It is the fundamental assumption humanity has made about reality. We cannot consistently demonstrate any disproof of this assumption.
>Not the GP but I think humility is kinda relevant here.
How so? If I assume all of reality is governed by math, but you don't. How does that make me not humble but you humble? Seems personal.
I guess it's kinda hubris on my part to question your ability to know with such high certainty things that philosophers have been struggling to prove for millennia...
What you said is only true for the bits of humanity you have decided to focus upon -- capitalist, technology-driven modern societies. If you looked beyond that, there are cultures that build society upon other assumptions. You might think those other modes are "wrong", but that's your personal view. For me, I personally don't think any of these are "true" in the absolute sense, as much as I don't think yours is "true". They're just ways humans with our mortal brains try to grapple with a reality that we don't understand.
As a sidenote, probability does not mean the thing you think it means. There's no reasonable frequentist interpretation for fundamental truth of reality, so you're just saying your Bayesian subjective probability says that math is "the most likely possibility". Which is fine, except everyone has their own different priors...
I never made a claim for absolute truth. I said it’s the most likely truth given the fact that you get up every morning and drive a car or turn on your computer and assume everything will work. Because we all assume it, we assume all of logic behind it to be true as well.
Whatever probability is, whatever philosophers say about any of this, it doesn’t matter. You act as if all of it is true, including the web technology that allows you to post your idea here. You are acting as if all the logic, science and technology involved in the creation of that web technology is real, and thus I am simply saying: because the entire world claims this assumption by its actions, my claim is in line with the entire world.
You can make a philosophical argument, but your actions aren’t in line with it. You may say no one can prove math or probability to be real, but you certainly don’t live your life that way. You don’t think that science, logic and technology will suddenly fall apart and stop working when you turn on your computer. In fact you live your life as if those things are fundamentally true. Yet you talk as if they might not be.
Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.
We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar; case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.
> Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.
If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:
An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades to create muscle contractions as outputs.
Yes. As long as we're confident in our definitions, that makes the questions easy. Is that the same as a feedforward algorithm inferring static weights to create a tokenized response string? Do you necessarily need an electrochemical network with external stimuli and feedback to generate legible text?
No. The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.
> The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.
That "can" should be "could", else it presumes too much.
For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.
I mean, humanity doesn't agree with itself on what any of the three initials of AGI mean, there are 40 definitions of the word "consciousness", there are arguments about whether there is exactly one or many independent G-factors in human IQ scores, and also whether those scores mean anything beyond correlating with school grades, and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually, see e.g. most discussions where aphantasia comes up).
The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is because neither did evolution when we popped out.
The fact that it doesn't operate identically or even similarly on the physical layer doesn't mean that similar processes cannot emerge on higher levels of abstraction.
In that video above, Geoffrey Hinton directly says we don't understand how it works.
So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.
Even though we know it's a feedforward network and we know how the queries are tokenized, you cannot tell me what an LLM will say, nor why an LLM said something for a given prompt, which shows that we can't fully control an LLM because we don't fully understand it.
Don't try to just argue with me. Argue with the experts. Argue with the people who know more than you, like Hinton.
Hinton invented the neural network, which is not the same as the transformer architecture used in LLMs. Asking him about LLM architectures is like asking Henry Ford if he can build a car from a bunch of scrap metal; of course he can't. He might understand the engine or the bodywork, but it's not his job to know the whole process. Nor is it Hinton's.
And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.
> you cannot tell me what an LLM would say nor tell me why an LLM said something for a given prompt showing that we can't fully control an LLM because we don't fully understand it.
You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning no random seeds, no temperature sampling, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what the AI would say ahead of time if you can solve for the non-deterministic seeded entropy, or remove it entirely.
LLM weights and the tokenizer are both fixed and deterministic; the inference software often introduces variability for more varied responses. Just so we're on the same page here.
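For what it's worth, the determinism claim is easy to try with the Hugging Face transformers API; a minimal sketch (model name and prompt are just examples), where greedy decoding via do_sample=False removes all sampling, so repeated runs give the same string, up to floating-point or hardware nondeterminism:

    # With sampling disabled, generation is a pure function of weights + tokenizer + prompt,
    # so repeated runs produce identical text. Model and prompt are illustrative placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tok("The speed of light is", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=20, do_sample=False)   # greedy, no RNG used

    print(tok.decode(out[0], skip_special_tokens=True))

As the replies note, predictability in this mechanical sense is not the same as having a predictive theory of why a given prompt yields a given continuation.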
> If you remove all of the seeded RNG during inference (meaning; no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time.
That answers the "what", but not the "why" nor the "how exactly", with the latter being crucial to any claim that we understand how these things actually work.
If we actually did understand that, we wouldn't need to throw terabytes of data on them to train them - we'd just derive that very equation directly. Or, at the very least, we would know how to do so in principle. But we don't.
> But saying that we don't know how AI works is empirically false;
Your statement completely contradicts Hinton's statement. You didn’t even address his point. Basically you’re saying Hinton is wrong and you know better than him. If so, counter his argument; don’t restate your argument in the form of an analogy.
> You'd think this, but it's actually wrong.
No, you’re just trying to twist what I’m saying into something that’s wrong. First, I never said it’s not deterministic. All computers are deterministic, even RNGs. I’m saying we have no theory about it. Take a plane, for example: you can predict its motion via a theory. The theory allows us to understand and control an airplane and predict its motion. We have nothing like that for an LLM. No theory that helps us predict, no theory that helps us fully control, and no theory that helps us understand it beyond the high-level abstraction of a best-fit curve in multidimensional space. All we have is an algorithm that allows an LLM to self-assemble as a side effect of emergent effects.
Rest assured I understand the transformer as much as you do (which is to say humanity has limited understanding of it); you don’t need to assume I’m just going off Hinton’s statements. He and I know and understand LLMs as much as you do, even though we didn’t invent them. Please address what I said and what he said with a counter-argument, not an analogy that just reiterates an identical point.
We understand and build the trellis that the LLMs "grow" on. We don't have good insight into how a fully grown LLM actually turns any specific input into any specific output. We can follow it through the network, but it's a totally senseless noisy mess.
"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.
(This is an illustrative example made for easy understanding, not something I specifically went and compared)
We have the Navier–Stokes equations which fit on a matchbox, yet for the last 25 years there's been a US$1,000,000 prize on offer to the first person providing a solution for a specific statement of the problem:
Prove or give a counter-example of the following statement:
In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.
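For reference, here is the incompressible form of the equations the matchbox remark alludes to, with density normalized to 1; the prize problem asks whether smooth, globally defined u and p exist in three space dimensions for smooth initial data u₀:

    % Incompressible Navier–Stokes: velocity field u, pressure p, viscosity \nu, forcing f
    \begin{aligned}
      \partial_t u + (u \cdot \nabla)\, u &= -\nabla p + \nu \, \Delta u + f, \\
      \nabla \cdot u &= 0, \qquad u(\cdot, 0) = u_0 .
    \end{aligned}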
I don't see how it will convince anyone: people said as much before chess, then again about Go, and are still currently disagreeing with each other about whether LLMs do or don't pass the Turing test.
Regardless, this was to demonstrate by analogy that things that seem simple can actually be really hard to fully understand.
The above is a video clip of Hinton basically contradicting what you’re saying.
So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way because I've noticed people responding to me are rude and completely dismiss me, and I don't get good-faith responses and intelligent discussion. I find that if people realize their statements contradict the statements of the industry and established experts, they tend to respond more charitably.
So please respond to me as if you just said to Hinton's face that what he said is utter nonsense, because what I said is based on what he said. Thank you.
Hi and thanks for engaging :-)
Well, it in fact depends on what intelligence is to your understanding:
-If it intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc, then AI is or will soon be more intelligent than us, while we humans are just muddling through or simply lucky having found relativity theory and other innovations just at the convenient moment in time ... So then, AI will soon also stumble over all kind of innovations. None of both will be able to deliberately think beyond what is thinkable at the respective present.
- But If intelligence is not only a level of pure rational cognition, but maybe an ability to somehow overcome these frame-limits, then humans obviously exert some sort of abilities that are beyond rational inference. Abilities that algorithms can impossibly reach, as all they can is compute.
- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.
The main point is: neither algorithms nor rationality can point beyond itself.
In other words: You cannot think out of the box - thinking IS the box.
(maybe have a quick look at my first proof -last chapter before conclusion- - you will find a historical timeline on that IQ-Thing)
Let me steal another users alternate phrasing: Since humans and computers are both bound by the same physical laws, why does your proof not apply to humans?
Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving. (And also: I am bound by thermodynamics as my mother in Law is, still i get disarranged by her mere presence while I always have to put laxatives in her wine to counter that)
2. human rationality is equally limited as algorithms. Neither an algorithm nor human logic can find itself a path from Newton to Einsteins SR. Because it doesn't exist.
3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.
In a nutshell: there obviously is no law that forbids us to innovate - we do this, quite often. There only is a logical boundary, that says that there is no way to derive something out of a something that is not part of itself - no way for thinking to point beyond what is thinkable.
Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... i guess "interesting thought" would not have been the probable answer... rather something like "have you been drinking? Stop doing that mental crap - go away, you little moron!"
> Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving.
You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But, humoring this idea, then if physical laws allow it, why can this "more than algorithmic" cognition not be done artificially? As you say - we can obviously do it. What magical line is preventing an artificial system from doing the same?
If by algorithmic you just mean anything that a Turing machine can do, then your theorem is asserting that the Church-Turing thesis isn't true.
Why not use that as the title of your paper? That a more fundamental claim.
The lack of mention of the Church-Turing thesis in both papers suggest he hasn't even considered that angle.
But it is the fundamental objection he would need to overcome.
There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.
> Basically because physical laws obviously allow more than algorithmic cognition and problem solving.
This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody and physics that "allow more than algorithmic cognition".
Your claim here also goes against the physical interpretation of the Church-Turing thesis.
Without rigorously addressing this, there is no point taking your papers seriously.
No problem here is you proof - although a bit long:
1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where
Σ is a finite symbol set and R is a finite set of inference rules.
Let Ω′ = (Σ′, R′) be a candidate successor frame.
Define a frame jump as: Frame Jump Condition: Ω′ extends Ω if Σ′\Σ ≠ ∅ or R′\R ≠ ∅
Let P be a deterministic Turing machine (TM) operating entirely within Ω.
Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.
(Whereas Σ = the set of all finite symbol strings in the frame; derivable outputs are formed from Σ under the inference rules R.)
Proof Sketch: P’s tape alphabet is fixed to Σ and symbols derived from Σ. By induction, no computation step can introduce a symbol not already in Σ. ∎
2. APPLICATION: Newton → Special Relativity
Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian Frame) Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR Frame)
Let φ = “The speed of light is invariant in all inertial frames.” Let Tᴿ be the theory of special relativity. Let Pᴺ be a TM constrained to Σᴺ.
By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.
But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ
→ Therefore Pᴺ ⊬ φ → Tᴿ ⊈ L(Pᴺ)
Thus:
Special Relativity cannot be derived from Newtonian physics within its original formal frame.
3. EMPIRICAL CONFLICT Let: Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t) Axiom N₂: Ether model for light speed Data D: Michelson–Morley ⇒ c = const
In Ωᴺ, combining N₁ and N₂ with D leads to contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ But by Lemma 1: impossible within Pᴺ. -> Frame must be exited to resolve data.
4. FRAME JUMP OBSERVATION
Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.
5. FINALLY
A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅
B: Einstein was human
C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).
Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.
QED.
BUT: Can Humans COMPUTE those functions? (As you asked)
-> Answer: a) No - because frame-jumping is not a computation.
It’s a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Goedelian paradox (truth unprovable in frame),b) trigger the halting problem , or c) collapse into semantic overload , where symbols become unstable, and inference breaks down.
In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.
Whoa there boss, extremely tough for you to casually assume that there is a consistent or complete metascience / metaphysics / metamathematics happening in human realm, but then model it with these impoverished machines that have no metatheoretic access.
This is really sloppy work, I'd encourage you to look deeper into how (eg) HOL models "theories" (roughly corresponding to your idea of "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.
Noncomputability is available in the form of Hilbert's choice, or you can add axioms yourself to capture what notion you think is incomputable.
Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.
Of course I accept that humans are subject to the Goedelian curse, and we are often incoherent, and we're never quite surely when we can stop collecting evidence or updating models based on observation. We are computational.
The claim isn’t that humans maintain a consistent metascience. In fact, quite the opposite. Frame jumps happen precisely because human cognition is not locked into a consistent formal system. That’s the point. It breaks, drifts, mutates. Not elegantly — generatively. You’re pointing to HOL-in-HOL or other meta-theoretical modeling approaches. But these aren’t equivalent. You can model a frame-jump after it has occurred, yes. You can define it retroactively. But that doesn’t make the generative act itself derivable from within the original system. You’re doing what every algorithmic model does: reverse-engineering emergence into a schema that assumes it. This is not sloppiness. It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅. That is a hard constraint. Humans, somehow, do. If you don’t like the label “frame jump,” pick another. But that phenomenon is real, and you can’t dissolve it by saying “well, in HOL I can model this afterward.” If computation is always required to have an external frame to extend itself, then what you’re actually conceding is that self-contained systems can’t self-jump — which is my point exactly...
> It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅
This is trivially false. For any TM with such an alphabet, you can run a program that simulates a TM with an alphabet that includes Σ′.
> Let a semantic frame be defined as Ω = (Σ, R)
But if we let an AGI operate on Ω2 = (English, Science), that semantic frame would have encompassed both Newton and Einstein.
Your argument boils down into one specific and small semantic frame not being general enough to do all of AGI, not that _any_ semantic frame is incapable of AGI.
Your proof only applies to the Newtonian semantic frame. But your claim is that it is true for any semantic frame.
Yes, of course — if you define Ω² as “English + All of Science,” then congratulations, you have defined an unbounded oracle. But you’re just shifting the burden.
No sysem starting from Ω₁ can generate Ω₂ unless Ω₂ is already implicit. ... If you build a system trained on all of science, then yes, it knows Einstein because you gave it Einstein. But now ask it to generate the successor of Ω² (call it Ω³ ) with symbols that don’t yet exist. Can it derive those? No, because they’re not in Σ². Same limitation, new domain. This isn’t about “a small frame can’t do AGI.” It’s about every frame being finite, and therefore bounded in its generative reach. The question is whether any algorithmic system can exeed its own Σ and R. The answer is no. That’s not content-dependent, that’s structural.
None of this is relevant to what I wrote. If anything, they sugget that you don't understand the argument.
If anything, your argument is begging the question - it's a logical fallacy - because your argument rests on humans exceeding the Turing computable, to use human abilities as evidence. But if humans do not exceed the Turing computable, then everything humans can do is evidence that something is Turing computable, and so you can not use human abilities as evidence something isn't Turing computable.
And so your reasoning is trivially circular.
EDIT:
To go into more specific errors, this is fasle:
> Let P be a deterministic Turing machine (TM) operating entirely within Ω.
>
> Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.
P can do so by simulating a TM P' whose alphabet includes σ. This is fundamental to the theory of computability, and holds for any two sets of symbols: You can always handle the larger alphabet by simulating one machine on the other.
When your "proof" contains elementary errors like this, it's impossible to take this seriously.
You’re flipping the logic.
I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.
Then I look at real-world examples (Einstein is just one) where new symbols, concepts, and transformation rules appear that were not derivable within the predecessor frame. You can claim, philosophically (!), that “well, humans must be computable, so Einstein’s leap must be too.” Fine. But now you’re asserting that the uncomputable must be computable because humans did it. That’s your circularity, not mine. I don’t claim humans are “super-Turing.” I claim that frame-jumping is not computation. You can still be physical, messy, and bounded .. and generate outside your rational model. That’s all the proof needs.
No, I'm not flipping the logic.
> I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.
Any such "proof" is irrelevant unless you can prove that humans can exceed the Turing computable. If humans can't exceed the Turing computable, then any "proof" that shows limits for algoritmic systems that somehow don't apply to humans must inherently be incorrect.
And so you're sidestepping the issue.
> But now you’re asserting that the uncomputable must be computable because humans did it.
No, you're here demonstrating you failed to understand the argument.
I'm asserting that you cannot use the fact that humans can do something as proof that humans exceed the Turing computable, because if humans do not exceed the Turing computable said "proof" would still give the same result. As such it does not prove anything.
And proving that humans exceed the Turing computable is a necessary precondition for proving AGI impossible.
> I don’t claim humans are “super-Turing.”
Then your claim to prove AGI can't exist is trivially false. For it to be true, you would need to make that claim, and prove it.
That you don't seem to understand this tells me you don't understand the subject.
(See also my edit above; your proof also contains elmentary failures to understand Turing machines)
You’re misreading what I’m doing, and I suspect you’re also misdefining what a “proof” in this space needs to be.
I’m not assuming humans exceed the Turing computable. I’m not using human behavior as a proof of AGI’s impossibility. I’m doing something much more modest - and much more rigorous.
Here’s the actual chain:
1. There’s a formal boundary for algorithmic systems. It’s called symbolic containment. A system defined by a finite symbol set Σ and rule set R cannot generate a successor frame (Σ′, R′) where Σ′ introduces novel symbols not contained in Σ. This is not philosophy — this is structural containment, and it is provable.
2. Then I observe: in human intellectual history, we find recurring examples of frame expansion. Not optimization, not interpolation — expansion. New primitives. New rules. Special relativity didn’t emerge from Newton through deduction. It required symbols and structures that couldn’t be formed inside the original frame.
3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.
4. This leads to a conclusion: if AGI is an algorithmic system (finite symbols, finite rules, formal inference)then it will not be capable of frame jumps.And it is not incapable of that, because it lacks compute. The system is structurally bounded by what it is.
So your complaint that I “haven’t proven humans exceed Turing” is misplaced. I didn’t claim to. You’re asking me to prove something that I simply don’t need to assert .
I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed). Therefore, if humans are purely algorithmic, something’s missing in our understanding of how those systems operate. And if AGI remains within the current algorithmic paradigm, it will not do X. That’s what I’ve shown.
You can still believe humans are Turing machines, fine for me. But if this belief is to be more than some kind of religious statement, then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅. It is you that would need to show how uncomputable concepts emerge from computable substrates without violating containment (->andthat means: witout violating its own logic - as in formal systems, logic and containment end up as the same thing: Your symbol set defines your expressive space, step outside that, and you’re no longer reasoning — you’re redefining the space, the universe you’re reasoning in).
Otherwise, the limitation stands — and the claim that “AGI can do anything humans do” remains an ungrounded leap of faith.
Also: if you believe the only valid proof of AGI impossibility must rest on metaphysical redefinition of humanity as “super-Turing,” then you’ve set an artificial constraint that ensures no such proof could ever exist, no matter the logic.
That’s trading epistemic rigor for intellectual insulation.
As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.
There's nothing rigorous about this. It's pure crackpottery.
As long as you claim to disprove AGI, it inherently follows that you need to prove that humans exceed the Turing computable to succeed. Since you specifically state that you are not trying to prove that humans exceed the Turing computable, you're demonstrating a fundamental lack of understanding of the problem.
> 3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.
This is only true if humans exceed the Turing computable, as otherwise humans are proof that this is something that an algorithmic system can do. So despite claiming that you're not trying to prove that humans exceed the Turing computable, you are making the claim that humans can.
> I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).
This is a direct statement that you claim that humans are observed to exceed the Turing computable.
> then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅
This is fundamental to Turing equivalence. If there exists any Turing machine that can generate Σ′, then any Turing machine can generate Σ′.
Anything that is possible with any Turing machine is, in fact, possible with a machine with as few as 2 symbols (the smallest known universal Turing machine is a (2,3) machine, i.e. 2 states and 3 symbols, and per Shannon you can always trade states for symbols, so a (3,2) machine is also possible). This is because you can always simulate an environment where a larger alphabet is encoded as sequences of the smaller one.
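As a rough sketch of what that simulation looks like (toy code, with an arbitrary made-up alphabet): a machine restricted to the alphabet {0, 1} can still manipulate any richer alphabet Σ′ by fixing an encoding. The "new" symbols never appear on the binary tape; only their code words do.

    # Toy sketch: encode a richer alphabet SIGMA_PRIME into fixed-width binary
    # code words. A machine that only ever reads and writes 0 and 1 can simulate
    # computation over SIGMA_PRIME via this convention.
    SIGMA_PRIME = ["a", "b", "ψ", "□"]   # includes symbols the binary machine has "never seen"
    WIDTH = max(1, (len(SIGMA_PRIME) - 1).bit_length())

    def encode(s: str) -> str:
        return "".join(format(SIGMA_PRIME.index(ch), f"0{WIDTH}b") for ch in s)

    def decode(bits: str) -> str:
        return "".join(SIGMA_PRIME[int(bits[i:i + WIDTH], 2)]
                       for i in range(0, len(bits), WIDTH))

    tape = encode("aψ□b")
    print(tape)            # '00101101' -- only 0s and 1s ever appear on the tape
    print(decode(tape))    # 'aψ□b' -- the richer alphabet, recovered by convention

The fixed alphabet constrains the raw tape symbols, not what can be represented on the tape.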
> As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.
This is exactly the part that fails.
Any TM can simulate any other, and by extension, any TM can be extended to any alphabet through simulation.
If you don't understand this, then you don't understand the very basics of Turing Machines.
“Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?"”
Is that not the other way around? “…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”
Staying at high speed is symmetric! You'd both appear to age slower from the other's POV. It's only if one brother turns around and comes back, therefore accelerating, that you get an asymmetry.
Indeed. One of my other thoughts here on the Relativity example was "That sets the bar high given most humans can't figure out special relativity even with all the explainers for Einstein's work".
But I'm so used to AGI being conflated with ASI that it didn't seem worth it compared to the more fundamental errors.
Given rcxdude’s reply it appears I am one of those humans who can’t figure out special relativity (let alone general)
Wrt ‘AGI/ASI’, while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.
Humans are fallible in a way computers are not. One could argue any creative process is an exercise in fallibility.
More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer do such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.
[0]https://www.youtube.com/watch?v=LSHZ_b05W7o
Hang on, hasn't everyone spent the past few years complaining about LLMs and diffusion models being very fallible?
And we can get LLMs to do better by just prompting them to "think step by step" or replacing the first ten attempts to output a "stop" symbolic token with the token for "Wait… "?
I think humans have some kind of algorithm for deciding what's true and consolidating information. What that is I don't know.
This paper is about the limits in current systems.
AI currently has issues with seeing what's missing. Seeing the negative space.
When dealing with a complex codebase you are newly exposed to, you tackle an issue from multiple angles: data structures, code execution paths, and so on. Humans clearly have some pressure to go, "fuck, I think I lost the plot," and then approach it from another paradigm, or try to narrow scope, or, based on the increased information, isolate the core place where edits need to be made to achieve something.
Basically the ability to say, "this has stopped making sense" and stop or change approach.
Also, we clearly do path exploration and semantic compression in our sleep.
We also have the ability to transliterate data between semantic and visual structures, time series, and light algorithms (but not exponential algorithms; we have a known blind spot there).
Humans are better at seeing what's missing, better at not rushing to closure, better at reducing scope using many different approaches, and because we operate in linear time and there are a lot of very different agents, we collectively nibble away at complex problems over time.
I mean, on a 1:1 telomere basis, due to structural differences, people can be as low as 93% similar genetically.
We also have different brain structures, I assume they don't all function on a single algorithmic substrate, visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts of our brain that handle illogic better. We can introspect on our own semantic saturation, we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, we can dive on that part and then zoom back out.
There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing and even then the message type used seems flexible enough that you can shove word data into a visual processor part and see what falls out, and this happens without us thinking about it explicitly.
Yep definitely agree with this.
I guess so too... but whatever it is: it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there that cannot be transcended by tech, compute, training, data, etc.
Explain what you mean by "algorithm" and "algorithmic". Be very precise. Your entire argument hinges on this vague word, so it is necessary that you first explain what it means. From reading your replies here, it is clear you are laboring under a definition of "algorithm" quite different from the accepted one.
Why can't it be algorithmic? If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does do the same process to do things like consolidating information, processing the "world model" and so on.
Some processes are undoubtedly learned from experience but considering people seem to think many of the same things and are similar in many ways it remains to be seen whether the most important parts are learned rather than innate from birth.
Why can't it be algorithmic?
Why do you think it mustn't be algorithmic?
Why do you think humans are capable of doing anything that isn't algorithmic?
This statement, and the lack of any mention of the Church-Turing thesis in your papers, suggests you're using a non-standard definition of "algorithmic", and your argument rests on it.
I think the latter fact is quite self-demonstrably true.
I would really like to see your definition of general intelligence and argument for why humans don't fit it.
Colloquially, anything that matches humans in general intelligence and is built by us is by definition an AGI and generally intelligent.
Humans are the bar for general intelligence.
How so?
Humans use soul juice to connect to the understandome. Machines can't connect to the understandome because of Gödel's incompleteness; they can only make relationships between tokens, not map them to reality like we can via magic.
Stochastic parrots all the way down
https://ai.vixra.org/pdf/2506.0065v1.pdf
First of all, math isn’t any more real than language is. It’s an entirely human construct, so it’s possible you cannot reach AGI using mathematical means, as math might not be able to fully express it. It’s similar to how language cannot fully describe what a color is, only give vague approximations and measurements. If you wanted to create the color green, you could not do it by describing its various properties; you would have to create the actual green somehow.
As a somewhat colorblind person, I can tell you that the "actual green" is pretty much a lie :)
It's a deeply philosophical question what constitutes a subjective experience of "green" or whatever... but intelligence is a bit more tractable IMHO.
I don't think it would be unfair to accept the brain state of green as an accurate representation of green for all intents and purposes.
Similar to how "computer code" and "video game world" are the same thing. Everything in the video game world is perfectly encoded in the programming. There is nothing transcendent happening, it's two different views of the same core object.
Fair enough. But then, AGI wouldn't really be based on math, but on physics. Why would an artificially-constructed physical system have (fundamentally) different capabilities than a natural one?
My take is that it transcends any science that we'll understand and harness in the lifetime of anyone living today. It for all intents and purposes transcends science from our point of view, but not necessarily in principle.
> are humans not generally intelligent?
Have you not met the average person on the street? (/s)
Noted /s, but truly, this is why I think even current models are already more disruptive than naysayers are willing to accept any future model could ever be.
I'm noting the high frequency of think pieces from said naysayers. It's every day now: they're all furiously writing about flaws and limitations and extrapolating these to unjustifiable conclusions, predicting massive investment failures (inevitable, and irrelevant), arguing AGI is impossible with no falsifiable evidence, etc.
Humans do a lot of things that computers don't, such as be born, age (verb), die, get hungry, fall in love, reproduce, and more. Computers can only metaphorically do these things; human learning is correlated with all of them, and we don't confidently know how. Have some humility.
TFA presents an information-theoretic argument for AGI being impossible. My reading of your parent commenter is that they are asking why this argument does not also apply to humans.
You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).
The point is that if it's mathematically possible for humans, then it naively would be possible for computers.
All of that just sounds hard, not mathematically impossible.
As I understand it, this is mostly a rehash of the dated Lucas-Penrose argument, which most theory-of-mind researchers refute.
Taking GLP-1 makes me question how much hunger is really me versus my hormones controlling me.
We don’t even know how LLMs work. But we do know the underlying mechanisms are governed by math because we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.
So because of this we know reality is governed by maths. We just can’t fully model the high level consequence of emergent patterns due to the sheer complexity of trillions of interacting atoms.
So it’s not that there’s some mysterious supernatural thing we don’t understand. It’s purely a complexity problem in that we only don’t understand it because it’s too complex.
What does humility have to do with anything?
> we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.
> So because of this we know reality is governed by maths.
That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.
Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.
> What does humility have to do with anything?
Not the GP but I think humility is kinda relevant here.
>That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.
Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have based all of our technology, culture, and science around it. It is the fundamental assumption humanity has made about reality. We cannot consistently demonstrate any disproof of this assumption.
>Not the GP but I think humility is kinda relevant here.
How so? If I assume all of reality is governed by math and you don't, how does that make me not humble but you humble? Seems personal.
I guess it's kinda hubris on my part to question your ability to know things with such high certainty about things that philosophers have been struggling to prove for millennia...
What you said is only true for the bits of humanity you have decided to focus upon -- capitalist, technology-driven modern societies. If you looked beyond that, there are cultures that build society upon other assumptions. You might think those other modes are "wrong", but that's your personal view. For me, I personally don't think any of these are "true" in the absolute sense, as much as I don't think yours is "true". They're just ways humans with our mortal brains try to grapple with a reality that we don't understand.
As a sidenote, probability does not mean the thing you think it means. There's no reasonable frequentist interpretation for fundamental truth of reality, so you're just saying your Bayesian subjective probability says that math is "the most likely possibility". Which is fine, except everyone has their own different priors...
I never made a claim of absolute truth. I said it's the most likely truth, given that you get up every morning and drive a car or turn on your computer and assume everything will work. Because we all assume it, we assume all of the logic behind it to be true as well.
Whatever probability is, whatever philosophers say about any of this: it doesn't matter. You act like all of it is true, including the web technology that allows you to post your idea here. You are acting as if all the logic, science, and technology involved in the creation of that web technology is real. So I am simply saying: because the entire world affirms this assumption through its actions, my claim is in line with the entire world.
You can make a philosophical argument, but your actions aren't in line with it. You may say no one can prove math or probability to be real, but you certainly don't live your life that way. You don't think that science, logic, and technology will suddenly fall apart and stop working when you turn on your computer. In fact, you live your life as if those things are fundamentally true. Yet you talk as if they might not be.
> We don’t even know how LLMs work
Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.
We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et. al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.
> Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.
If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:
An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades to create muscle contractions as outputs.
Yes. As long as we're confident in our definitions, that makes the questions easy. Is that the same as a feedforward algorithm inferring static weights to create a tokenized response string? Do you necessarily need an electrochemical network with external stimuli and feedback to generate legible text?
No. The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.
> The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.
That "can" should be "could", else it presumes too much.
For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.
I mean, humanity doesn't agree with itself on what any of the three initials of AGI mean, there are 40 definitions of the word "consciousness", there are arguments about whether there is exactly one or many independent g-factors in human IQ scores, and also whether those scores mean anything beyond correlating with school grades, and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually, see e.g. most discussions where aphantasia comes up).
The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is because neither did evolution when we popped out.
The fact that it doesn't operate identically or even similarly on the physical layer doesn't mean that similar processes cannot emerge on higher levels of abstraction.
Pretty sure in most other contexts you wouldn't agree a medieval scribe knows how a fax machine works.
Geoffrey Hinton, the person largely responsible for the AI revolution, has this to say:
https://www.reddit.com/r/singularity/comments/1lbbg0x/geoffr...
https://youtu.be/qrvK_KuIeJk?t=284
In the video above, Geoffrey Hinton directly says we don't understand how it works.
So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.
Even though we know it's a feedforward network and we know how the queries are tokenized, you cannot tell me what an LLM would say nor tell me why an LLM said something for a given prompt showing that we can't fully control an LLM because we don't fully understand it.
Don't try to just argue with me. Argue with the experts. Argue with the people who know more than you, like Hinton.
Hinton did pioneering work on neural networks, which are not the same thing as the transformer architecture used in LLMs. Asking him about LLM architectures is like asking Henry Ford if he can build a car from a bunch of scrap metal; of course he can't. He might understand the engine or the bodywork, but it's not his job to know the whole process. Nor is it Hinton's.
And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.
> you cannot tell me what an LLM would say nor tell me why an LLM said something for a given prompt showing that we can't fully control an LLM because we don't fully understand it.
You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning: no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what the AI will say ahead of time if you solve for the seeded entropy, or remove it entirely.
The LLM weights and the tokenizer are both fixed and deterministic; it's the inference software that often introduces variability to get more varied responses. Just so we're on the same page here.
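A quick sketch of what I mean, using the Hugging Face transformers library (the choice of "gpt2" and the prompt are arbitrary examples): with sampling disabled, generation reduces to repeated argmax over fixed weights, so the same prompt produces the same string on every run.

    # Sketch (assumes the `transformers` and `torch` packages; "gpt2" is just an
    # illustrative model choice): greedy decoding with no sampling is fully
    # deterministic -- same prompt in, same string out, every time.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=10, do_sample=False)  # greedy: argmax, no RNG

    print(tok.decode(out[0]))   # identical output on every run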
> If you remove all of the seeded RNG during inference (meaning: no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time.
That answers the "what", but not the "why" nor the "how exactly", with the latter being crucial to any claim that we understand how these things actually work.
If we actually did understand that, we wouldn't need to throw terabytes of data on them to train them - we'd just derive that very equation directly. Or, at the very least, we would know how to do so in principle. But we don't.
> But saying that we don't know how AI works is empirically false;
Your statement completely contradicts Hinton's statement. You didn't even address his point. Basically you're saying Hinton is wrong and you know better than he does. If so, counter his argument; don't restate your own in the form of an analogy.
> You'd think this, but it's actually wrong.
No, you're just trying to twist what I'm saying into something that's wrong. First, I never said it's not deterministic. All computers are deterministic, even RNGs. I'm saying we have no theory about it. With a plane, for example, you can predict its motion via a theory. The theory allows us to understand and control an airplane and predict its motion. We have nothing like that for an LLM: no theory that helps us predict, no theory that helps us fully control, and no theory that helps us understand it beyond the high-level abstraction of a best-fit curve in multidimensional space. All we have is an algorithm that allows an LLM to self-assemble as a side effect of emergent effects.
Rest assured I understand the transformer as much as you do (which is to say, humanity has limited understanding of it); you don't need to assume I'm just going off Hinton's statements. He and I know and understand LLMs as much as you do, even though we didn't invent them. Please address what I said and what he said with a counterargument, not an analogy that just reiterates an identical point.
>We don’t even know how LLMs work.
Care to elaborate? Because that is utter nonsense.
We understand and build the trellis that the LLMs "grow" on. We don't have good insight into how a fully grown LLM actually turns any specific input into any specific output. We can follow it through the network, but it's a totally senseless noisy mess.
"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.
(This is an illustrative example made for easy understanding, not something I specifically went and compared)
We don't know the path for how a given input produces a given output, but that doesn't mean we don't know how LLMs work.
We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.
We have the Navier–Stokes equations, which fit on a matchbox, yet for the last 25 years there's been a US$1,000,000 prize on offer to the first person providing a solution to a specific statement of the problem.
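For reference, here is the matchbox version (the incompressible form; u is velocity, p pressure, ρ density, ν kinematic viscosity, f body forces):

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
        = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
    \qquad \nabla\cdot\mathbf{u} = 0

Two short lines, and yet whether smooth solutions always exist in 3D is still an open problem.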
And when that prize is claimed, we'll ring the bell on AGI being found. Gentleman's agreement.
I don't see how it will convince anyone: people said as much before chess, then again about Go, and are still currently disagreeing with each other if LLMs do or don't pass the Turing test.
Regardless, this was to demonstrate by analogy that things that seem simple can actually be really hard to fully understand.
https://youtu.be/qrvK_KuIeJk?t=284
The above is a video clip of Hinton basically contradicting what you’re saying.
So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way, because I've noticed people responding to me are rude, completely dismiss me, and don't give me good-faith responses and intelligent discussion. I find that if people realize their statements contradict the statements of established industry experts, they tend to respond more charitably.
So please respond to me as if you had just told Hinton to his face that what he said is utter nonsense, because what I said is based on what he said. Thank you.