Humans do a lot of things that computers don't: they are born, age, die, get hungry, fall in love, reproduce, and more. Computers can only do these things metaphorically; human learning is correlated with all of them, and we don't confidently know how. Have some humility.

TFA presents an information-theoretic argument for AGI being impossible. My reading of your parent commenter is that they are asking why this argument does not also apply to humans.

You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).

The point is that if it's mathematically possible for humans, then it naively would be possible for computers.

All of that just sounds hard, not mathematically impossible.

As I understand it, this is mostly a rehash of the dated Lucas–Penrose argument, which most philosophy-of-mind researchers reject.

Taking GLP-1 makes me question how much hunger is really me versus my hormones controlling me.

We don’t even know how LLMs work. But we do know the underlying mechanisms are governed by math because we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

So because of this we know reality is governed by maths. We just can't fully model the high-level consequences of emergent patterns due to the sheer complexity of trillions of interacting atoms.

So it’s not that there’s some mysterious supernatural thing we don’t understand. It’s purely a complexity problem in that we only don’t understand it because it’s too complex.

What does humility have to do with anything?

> we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

> So because of this we know reality is governed by maths.

That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.

Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.

> What does humility have to do with anything?

Not the GP but I think humility is kinda relevant here.

>That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.

Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have built all of our technology, culture, and science around it. It is the fundamental assumption humanity has made about reality, and no one has consistently demonstrated a counterexample to it.

>Not the GP but I think humility is kinda relevant here.

How so? If I assume all of reality is governed by math and you don't, how does that make me not humble and you humble? Seems personal.

I guess it's kinda hubris on my part to question your ability to know, with such high certainty, things that philosophers have been struggling to prove for millennia...

What you said is only true for the bits of humanity you have decided to focus upon -- capitalist, technology-driven modern societies. If you looked beyond that, there are cultures that build society upon other assumptions. You might think those other modes are "wrong", but that's your personal view. For me, I personally don't think any of these are "true" in the absolute sense, as much as I don't think yours is "true". They're just ways humans with our mortal brains try to grapple with a reality that we don't understand.

As a sidenote, probability does not mean the thing you think it means. There's no reasonable frequentist interpretation for fundamental truth of reality, so you're just saying your Bayesian subjective probability says that math is "the most likely possibility". Which is fine, except everyone has their own different priors...

I never made a claim for absolute truth. I said it's the most likely truth given the fact that you get up every morning and drive a car or turn on your computer and assume everything will work. Because we all assume it, we assume all of the logic behind it to be true as well.

Whatever probability is, whatever philosophers say about any of this, it doesn't matter. You act like all of it is true, including the web technology that allows you to post your idea here. You act as if all the logic, science, and technology involved in creating that web technology is real. The entire world affirms this assumption through its actions, so my claim is in line with the entire world.

You can make a philosophical argument, but your actions aren't in line with it. You may say no one can prove math or probability to be real, but you certainly don't live your life that way. You don't think that science, logic, and technology will suddenly fall apart and stop working when you turn on your computer. In fact, you live your life as if those things are fundamentally true. Yet you talk as if they might not be.

> We don’t even know how LLMs work

Speak for yourself. LLMs are a feedforward algorithm running inference over static weights to produce a tokenized response string.

We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.

> Speak for yourself. LLMs are a feedforward algorithm running inference over static weights to produce a tokenized response string.

If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:

An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades to create muscle contractions as outputs.

Yes. As long as we're confident in our definitions, that makes the questions easy. Is that the same as a feedforward algorithm inferring static weights to create a tokenized response string? Do you necessarily need an electrochemical network with external stimuli and feedback to generate legible text?

No. The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.

> The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.

That "can" should be "could", else it presumes too much.

For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.

I mean, humanity doesn't agree with itself on what any of the three initials of AGI mean, there are 40 definitions of the word "consciousness", there are arguments about whether there is exactly one or many independent G-factors in human IQ scores, and about whether those scores mean anything beyond correlating with school grades, and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually; see e.g. most discussions where aphantasia comes up).

The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is because neither did evolution when we popped out.

The fact that it doesn't operate identically or even similarly on the physical layer doesn't mean that similar processes cannot emerge on higher levels of abstraction.

Pretty sure in most other contexts you wouldn't agree a medieval scribe knows how a fax machine works.

Geoffrey Hinton, the person largely responsible for the AI revolution, has this to say:

https://www.reddit.com/r/singularity/comments/1lbbg0x/geoffr...

https://youtu.be/qrvK_KuIeJk?t=284

In the video above, Geoffrey Hinton directly says we don't understand how it works.

So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.

Even though we know it's a feedforward network and we know how the queries are tokenized, you cannot tell me what an LLM will say, nor why it said what it said, for a given prompt. That shows we can't fully control an LLM, because we don't fully understand it.

Don't just argue with me. Argue with the experts. Argue with the people who know more than you do, like Hinton.

Hinton pioneered neural networks, which are not the same as the transformer architecture used in LLMs. Asking him about LLM architectures is like asking Henry Ford if he can build a car from a bunch of scrap metal; of course he can't. He might understand the engine or the bodywork, but it's not his job to know the whole process. Nor is it Hinton's.

And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.

> you cannot tell me what an LLM will say, nor why it said what it said, for a given prompt. That shows we can't fully control an LLM, because we don't fully understand it.

You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning; no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what AI would say ahead of time if you can solve for the non-deterministic seeded entropy, or remove it entirely.

The LLM weights and the tokenizer are both fixed and deterministic; it's the inference software that often introduces variability to get more varied responses. Just so we're on the same page here.
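
To make the point concrete, here's a toy sketch (nothing like a real LLM; every name and size below is made up purely for illustration) showing that greedy decoding over static weights is a pure function of the prompt:

  # Toy illustration, not a real LLM: with fixed weights and greedy
  # (argmax) decoding, the output depends only on the prompt.
  import numpy as np

  rng = np.random.default_rng(0)             # fixed seed -> fixed "weights"
  VOCAB, EMBED = 50, 16                      # made-up sizes
  W_embed = rng.normal(size=(VOCAB, EMBED))  # static embedding matrix
  W_out = rng.normal(size=(EMBED, VOCAB))    # static output projection

  def generate(prompt_ids, n_tokens=5):
      """Greedy decode: no sampling, no temperature, no RNG at inference."""
      ids = list(prompt_ids)
      for _ in range(n_tokens):
          h = W_embed[ids].mean(axis=0)       # crude stand-in for "context"
          logits = h @ W_out                  # feedforward pass
          ids.append(int(np.argmax(logits)))  # argmax is deterministic
      return ids

  print(generate([3, 14, 7]))  # prints the same token ids on every run

Real inference stacks layer sampling (temperature, top-p, seeded RNG) on top of something like this, and that sampling is where the run-to-run variation comes from.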

> If you remove all of the seeded RNG during inference (meaning; no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time.

That answers the "what", but not the "why" nor the "how exactly", with the latter being crucial to any claim that we understand how these things actually work.

If we actually did understand that, we wouldn't need to throw terabytes of data on them to train them - we'd just derive that very equation directly. Or, at the very least, we would know how to do so in principle. But we don't.

> But saying that we don't know how AI works is empirically false;

Your statement completely contradicts Hinton's statement. You didn't even address his point. Basically you're saying Hinton is wrong and you know better than him. If so, counter his argument; don't restate your own argument in the form of an analogy.

> You'd think this, but it's actually wrong.

No, you're just trying to twist what I'm saying into something that's wrong. First, I never said it's not deterministic. All computers are deterministic, even RNGs. I'm saying we have no theory about it. A plane, for example: you can predict its motion via a theory. The theory allows us to understand and control an airplane and predict its motion. We have nothing like that for an LLM. No theory that helps us predict, no theory that helps us fully control, and no theory that helps us understand it beyond the high-level abstraction of a best-fit curve in multidimensional space. All we have is an algorithm that lets an LLM self-assemble as a side effect of emergent effects.

Rest assured I understand the transformer as much as you do (which is to say, humanity has limited understanding of it); you don't need to assume I'm just going off Hinton's statements. He and I know and understand LLMs about as much as you do, even though we didn't invent them. Please address what I said and what he said with a counterargument, not an analogy that just reiterates an identical point.

>We don’t even know how LLMs work.

Care to elaborate? Because that is utter nonsense.

We understand and build the trellis that the LLMs "grow" on. We don't have good insight into how a fully grown LLM actually turns any specific input into any specific output. We can follow it through the network, but it's a totally senseless noisy mess.

"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.

(This is an illustrative example made for easy understanding, not something I specifically went and compared)
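
A toy sketch of what I mean (hypothetical token ids and random vectors, purely illustrative, not pulled from any real model):

  # "Cat" and "cat" typically get different token ids, so they index
  # different rows of the embedding table and light up different units.
  import numpy as np

  rng = np.random.default_rng(42)
  toy_vocab = {"Cat": 0, "cat": 1}        # hypothetical token ids
  embeddings = rng.normal(size=(2, 8))    # hypothetical embedding table

  a = embeddings[toy_vocab["Cat"]]
  b = embeddings[toy_vocab["cat"]]
  cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
  print(f"cosine similarity between 'Cat' and 'cat' rows: {cos:.2f}")

In a trained model the two rows end up related but not identical; the part we can't do is explain, step by step, how that difference propagates through dozens of layers into the final output.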

We don't know the path for how a given input produces a given output, but that doesn't mean we don't know how LLMs work.

We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.

We have the Navier–Stokes equations, which fit on a matchbox, yet for the last 25 years there's been a US$1,000,000 prize on offer to the first person providing a solution for a specific statement of the problem:

  Prove or give a counter-example of the following statement:

  In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.
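
For reference, these are the equations the prize statement is about, in their incompressible form (LaTeX; u is velocity, p pressure, rho density, nu kinematic viscosity, f a body force):

  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
  \qquad \nabla\cdot\mathbf{u} = 0

That's it: momentum balance plus the incompressibility constraint, and we still can't say whether smooth, globally defined solutions always exist in three dimensions.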

And when that prize is claimed, we'll ring the bell on AGI being found. Gentleman's agreement.

I don't see how it will convince anyone: people said as much before chess, then again about Go, and are still currently disagreeing with each other if LLMs do or don't pass the Turing test.

Regardless, this was to demonstrate by analogy that things which seem simple can actually be really hard to fully understand.

https://youtu.be/qrvK_KuIeJk?t=284

The above is a video clip of Hinton basically contradicting what you’re saying.

So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way, because I've noticed that people responding to me are rude and completely dismiss me, and I don't get good-faith responses and intelligent discussion. I find that if people realize their statements contradict the statements of the industry and established experts, they tend to respond more charitably.

So please respond to me as if you just said to Hinton's face that what he said is utter nonsense, because what I said is based on what he said. Thank you.