> The story that humans have access to some pure deductive engine while LLMs are just faking it with statistics may be more flattering to humans than it is accurate.
Your point rings true for most human reasoning most of the time. Still, at least some humans can run that deductive engine, and it seems to be a key part (though not the only part) of scientific and mathematical reasoning. Even informal experimentation and iteration rest on deductive feedback loops.
The fact that humans can learn to do X (sometimes well, often badly, and many never learn at all) strongly supports the conjecture that X is not how they naturally do things.
I can perform symbolic calculations too. But most people have only a limited version of this skill, and many who never learn to think symbolically still lead full lives.
I think it is fair to say humans don't naturally think in formal or symbolic terms.
People pattern match.
Another clue: humans have to practice things and become familiar with them before they can reason even somewhat reliably about them, even when they have already learned some formal reasoning.
---
Higher-level reasoning is always implemented as specific forms of lower-level reasoning.
There is a common confusion between what the substrate does and what higher-order processes can be built on that substrate.
From an implementation view we can "just" be doing pattern matching, and yet, from a capability view, specific compositions of pattern matching can go far "beyond" pattern matching.
How else could neurons think? We are "only" neurons, yet we far surpass the capabilities that individual neurons have.
I don't disagree with any of that. My comment was only about the question of human-specific capabilities that current LLMs may not be able to duplicate. I was not making the value judgments you seem to have read into it.
When people do math or rigorous deductive reasoning, are we sure they aren't just pattern matching with a set of carefully chosen, interacting patterns, refined by ancient philosophers precisely because they produce consistent results when applied in correctly patterned ways?
I've often wondered this. I suspect not, but I don't know. You're right, though, that the answer matters for understanding the limitations of LLMs relative to humans.