We're conscious animals who communicate because we navigate social spaces, not because we're completing the next token. I wonder about hackers who think they're nothing more than the latest tech.

You postulate it as if these two are mutually exclusive, but it's not at all clear why we can't be "completing the next token" to communicate in order to navigate social spaces. This last part is just where our "training" (as species) comes from, it doesn't really say anything about the mechanism.

Because what's motivating our language is a variety of needs, emotions and experiences as social animals. As such we have goals and desires. We're not sitting there waiting to be prompted for some output.

You constantly have input from all your senses, which is effectively your "prompt". If you stick a human into a sensory deprivation tank for long enough, very weird things happen.

How do you know we’re not just completing the next token?

It seems eminently plausible that the way cognition works is to take in the current context and select the most appropriate next action/token. In fact, it's hard to think of a form of cognition that isn't "given past/context, predict the next thing".
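The "given context, select the most appropriate next thing" loop can be sketched in a few lines. This is a toy illustration only: the scoring function here is entirely made up (it just favors actions that haven't occurred recently), standing in for whatever a brain or an LLM actually computes.

```python
def score(context, action):
    # Hypothetical stand-in for a real model: prefer actions that are
    # novel or least recent in the context. Recent actions score low.
    if action not in context:
        return len(context)
    return context[::-1].index(action)  # 0 = most recent

def next_action(context, candidates):
    # Core of the loop: given past/context, pick the best-scoring next thing.
    return max(candidates, key=lambda a: score(context, a))

context = ["wake", "coffee"]
print(next_action(context, ["coffee", "work", "wake"]))  # -> work
```

The point is structural: perception fills `context`, and behavior falls out of repeatedly choosing the next step, whatever the real scoring mechanism is.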

Philosophers have been arguing a parallel point for centuries. Does intelligence require some sort of (ostensibly human-ish) qualia or does “if it quacks like a duck, it is a duck” apply?

I think it's better to look at large language models in the context of Wittgenstein. Humans are more than next token predictors because we participate in “language games” through which we experimentally build up a mental model for what each word means. LLMs learn to “rule follow” via a huge corpus of human text but there’s no actual intelligence there (in a Wittgensteinian analysis) because there’s no “participation” beyond RLHF (in which humans are playing the language games for the machine). There’s a lot to unpack there but that’s the gist of my opinion.

Until we get some rigorous definitions for intelligence or at least break it up into many different facets, I think pie in the sky philosophy is the best we can work with.

Trivially, because any two of us rarely produce the same "next token"!

An ensemble of LLMs trained identically and decoding deterministically (greedy, temperature 0) would generate the same next token(s) forever. But we don't - we generate different sequences.
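A minimal sketch of that determinism point, with a made-up next-token distribution standing in for identical "weights": two copies decoding greedily (argmax) agree forever, while two copies sampling with different random seeds will almost certainly diverge.

```python
import random

VOCAB = ["the", "cat", "sat", "mat"]
PROBS = [0.1, 0.4, 0.3, 0.2]  # toy stand-in for shared, identical weights

def greedy_step():
    # Deterministic decoding: always the highest-probability token.
    return VOCAB[PROBS.index(max(PROBS))]

def sampled_step(rng):
    # Stochastic decoding: draw a token in proportion to its probability.
    return rng.choices(VOCAB, weights=PROBS, k=1)[0]

# Two identical greedy decoders never disagree:
print(all(greedy_step() == greedy_step() for _ in range(100)))  # -> True

# Two samplers with different seeds drift apart almost immediately:
a, b = random.Random(0), random.Random(1)
seq_a = [sampled_step(a) for _ in range(20)]
seq_b = [sampled_step(b) for _ in range(20)]
print(seq_a == seq_b)  # almost certainly False
```

So whether an ensemble "agrees forever" depends on the decoding strategy, not just on the weights.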

We are not LLMs.

If you ignore everything that makes us human to make some sort of analogy between brain activity and LLMs. Let us not forget they are tools we made to serve our goals.

> How do you know we’re not just completing the next token

Because we (humans) weren't born into a world with computers, internet, airplanes, satellites, etc.

"Complete next token" means that everything is already in the data set. It can remix things in interesting ways, sure. But that isn't the same as creating something new

Edit: I would love to hear someone's idea about how you could "parrot" your way into landing people on the moon without any novel discovery or invention

Everything is made out of just Protons, Neutrons, Electrons, along with some fields that allow interaction. (and Muons, Neutrinos, and a few others)

Everything that is physical is nothing but remixes and recombinations of a very small set of tokens.

> Everything that is physical is nothing but remixes and recombinations of a very small set of tokens.

We're not talking about "physical" with LLMs, we're talking about knowledge and creativity and reasoning, which are metaphysical.

The sum total of human knowledge cannot possibly be purely composed of remixes and recombinations, there has to be some baseline that humans invented for there to even be something to remix!

All of that is rooted in physics though.

Knowledge and creativity absolutely are physical things. It's clear from brain injury studies that there are very localized and specific functions underlying this creativity.

Drugs also clearly have a very physical effect on these attributes.

You're conflating symbolic descriptions for the physical stuff itself.

You're right to flag the distinction between symbols and substance, but I think you're misapplying it here.

I'm not conflating symbolic systems with the physical substrate: they're obviously different levels of abstraction. What I am saying is that symbolic reasoning, language, creativity, and knowledge all emerge from the same underlying physical processes. They're not magic. They're not floating in some Platonic realm. They’re instantiated in real, measurable patterns, whether in neurons or silicon.

You can't have metaphysics without physics. And we have solid evidence, from neuroscience, from pharmacology, from evolutionary biology, that the brain's symbolic output is fundamentally a physical phenomenon. Injuries, chemicals, electrical stimulation, they all modulate “metaphysical” experience in completely physical ways.

Emergence matters here. Yes, atoms aren’t thoughts, but enough atoms arranged the right way do start behaving like a thinking system. That’s the whole point of complex systems theory, chaos theory, and even early AI work like Hofstadter and Dennett. I recommend "Gödel, Escher, Bach", or Melanie Mitchell's "Complexity: A Guided Tour", if you're curious.

If you're arguing there's something else, some kind of unphysical or non-emergent component to knowledge or creativity, I'd honestly love to hear more, because that's a bold claim. But waving away the physical substrate as irrelevant doesn’t hold up under scrutiny.

Everyone's computing the next token. Intelligence is computing the right token.

Until we create the next thing, then intelligence will be compared to that. Anyway, I don't think neuroscientists are making this claim.

What is the "right" token? How do you identify it?

Best to not assume humans are LLMs.