> and we don't need to know how exactly human thinking works to acknowledge that.
Until you know how thinking works in humans, you can't say something else is different. We can provide AI models with the same inputs we receive ourselves. Claiming that we don't form our thinking from statistics over those inputs plus the state of the brain is a massive claim in its own right.
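To make "statistics over inputs" concrete, here is a toy sketch (Python, with a made-up corpus) of the simplest possible statistical predictor: a bigram model that picks the next word purely from observed co-occurrence counts. Real models learn vastly richer statistics than this, but the principle is the same.

```python
# Toy illustration of prediction from statistics over inputs: a bigram
# model choosing the next word from raw co-occurrence counts.
# Corpus and words are invented for the example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    # Most frequent continuation given the previous word.
    return counts[prev].most_common(1)[0][0]

print(predict("the"))  # 'cat' (seen twice, vs once each for 'mat'/'fish')
```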
> Until you know how thinking works in humans, you can't say something else is different.
Yes, I very much can, because I can observe outcomes. Humans are a) a lot more capable than language models, and b) humans do not rely solely on the statistical relationships of language tokens.
How can I show that? Easily, in fact: language tokens require organized language.
And our closest evolutionary relatives (the great apes) don't rely on organized speech, yet they are capable of advanced cognition (planning, episodic memory, theory of mind, theory of self, ...). The same is true for other living beings, even vertebrates that are not closely related to us, like Corvidae, and even some invertebrates like cephalopods.
So unless you can show that our brains are somehow more closely related to silicon-based integrated circuits than they are to those of a gorilla, raven, or octopus, my point stands.
> Humans are a) a lot more capable than language models
That's a difference in scale of capability, not in architecture. A human kid is less capable than an adult, but you wouldn't classify the two as thinking via different mechanisms.
> b) humans do not rely solely on the statistical relationships of language tokens. (...) Language tokens require organized language.
That's just a matter of how you provide the data. Multimodal models can accept whole vectors describing images, sounds, smells, or whatever else; all of them can be processed, and none of them are organized language.
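For concreteness, here is a minimal sketch (Python/NumPy, with made-up dimensions and random weights standing in for learned ones) of the common ViT-style approach: image patches are linearly projected into the same embedding space as text tokens, so the model consumes one sequence of same-shaped vectors regardless of modality.

```python
# Sketch: non-text data reduced to vectors in a model's embedding space,
# so the transformer itself never sees organized language.
# All names and dimensions are illustrative, not from any specific model.
import numpy as np

EMBED_DIM = 512  # hypothetical model embedding width
rng = np.random.default_rng(0)

# Text path: token ids -> learned embedding table.
vocab_size = 32000
token_table = rng.normal(size=(vocab_size, EMBED_DIM))

def embed_text(token_ids):
    return token_table[token_ids]  # (num_tokens, EMBED_DIM)

# Image path: split pixels into patches, flatten each patch, and linearly
# project into the same EMBED_DIM space (the ViT "patch embedding" idea).
PATCH = 16
proj = rng.normal(size=(PATCH * PATCH * 3, EMBED_DIM))

def embed_image(image):  # image: (H, W, 3) float array
    h, w, _ = image.shape
    patches = (image.reshape(h // PATCH, PATCH, w // PATCH, PATCH, 3)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, PATCH * PATCH * 3))
    return patches @ proj  # (num_patches, EMBED_DIM)

# Both modalities end up as one sequence of same-shaped vectors.
text_emb = embed_text(np.array([17, 4242, 9]))
image_emb = embed_image(rng.random((64, 64, 3)))
sequence = np.concatenate([text_emb, image_emb])  # fed to the transformer
print(sequence.shape)  # (3 tokens + 16 patches, 512) -> (19, 512)
```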
> that our brains are somehow more closely related to silicon-based integrated circuits than they are to those of a Gorilla
That's an entirely different question from one about functional equivalence and limits of capability.