Humans are notoriously bad at formal logic. The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters. That looks a lot more like pattern matching than rule application.

Kahneman’s whole framework points the same direction. Most of what people call “reasoning” is fast, associative, pattern-based. The slow, deliberate, step-by-step stuff is effortful and error-prone, and people avoid it when they can. And even when they do engage it, they’re often confabulating a logical-sounding justification for a conclusion they already reached by other means.

So maybe the honest answer is: the gap between what LLMs do and what most humans do most of the time might be smaller than people assume. The story that humans have access to some pure deductive engine and LLMs are just faking it with statistics might be flattering to humans more than it’s accurate.

Where I’d still flag a possible difference is something like adaptability. A person can learn a totally new formal system and start applying its rules, even if clumsily. Whether LLMs can genuinely do that outside their training distribution or just interpolate convincingly is still an open question. But then again, how often do humans actually reason outside their own “training distribution”? Most human insight happens within well-practiced domains.

> The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters.

I'd never heard of the Wason selection task; I looked it up and could tell the right answer right away. But I can also tell you why: because I have some familiarity with formal logic and can, in your words, pattern-match the gotcha that "if x then y" is distinct from "if not x then not y".

In contrast to you, this doesn't make me believe that people are bad at logic or don't really think. It tells me that people are unfamiliar with "gotcha" formalities introduced by logicians that don't match the everyday use of language. If you added a simple addition to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.

Mind you, I'm not arguing that human thinking is necessarily more profound than what LLMs could ever do. However, judging from the output, LLMs have a tenuous grasp on reality, so I don't think that reductionist arguments along the lines of "humans are just as dumb" are fair. There's a difference that we don't really know how to overcome.

Quoting the Wikipedia article's formulation of the task for clarity:

> You are shown a set of four cards placed on a table, each of which has a number on one side and a color on the other. The visible faces of the cards show 3, 8, blue and red. Which card(s) must you turn over in order to test that if a card shows an even number on one face, then its opposite face is blue?

Confusion over the meaning of 'if' can only explain why people select the Blue card; it can't explain why people fail to select the Red card. If 'if' meant 'if and only if', then it would still be necessary to check that the Red card didn't have an even number. But according to Wason[0], "only a minority" of participants select (the study's equivalent of) the Red card.

[0] https://web.mit.edu/curhan/www/docs/Articles/biases/20_Quart...
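To make the asymmetry concrete, here's a small sketch (my own illustration with hypothetical card domains, not anything from the study) that enumerates which visible faces could conceal a counterexample to "if even, then blue":

```python
# Each card has a number on one side and a color on the other.
# A card must be turned over iff some possible hidden face would
# violate the rule "if a card shows an even number, its back is blue".
NUMBERS = range(1, 10)       # hypothetical domain of numbers
COLORS = ["blue", "red"]     # hypothetical domain of colors

def can_falsify(visible):
    if isinstance(visible, int):
        # A visible even number breaks the rule if its back is non-blue.
        return visible % 2 == 0 and any(c != "blue" for c in COLORS)
    # A visible color breaks the rule only if it is non-blue
    # and its back could be an even number.
    return visible != "blue" and any(n % 2 == 0 for n in NUMBERS)

cards = [3, 8, "blue", "red"]
must_flip = [c for c in cards if can_falsify(c)]
print(must_flip)  # [8, 'red']
```

The blue card drops out because nothing on its back can falsify the rule, while the red card stays in, which is exactly the selection Wason found most participants miss.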

People in everyday life are not evaluating rules. They evaluate cases: whether a case fits a rule.

So, when being told:

"Which card(s) must you turn over in order to test that if a card shows an even number on one face, then its opposite face is blue?"

they translate it to:

"Check the cards that show an even number on one face to see whether their opposite face is blue and vice versa"

Based on this, many would naturally pick the blue card (to test the direct case), and the 8 card (to test the "vice versa" case).

They won't check the red card to see if there's an odd number there that invalidates the formulation as a general rule, because they're not in the mindset of testing a general rule.

Would they do the same if they had more familiarity with rule validation in everyday life, or if they had a more verbose and explicit explanation of the goal?

Yeah, maybe if you phrased it as "Which card(s) must you turn over in order to ensure that all even-numbered cards are blue?" you'd get a better response?

Exactly. We invented rule-based machines so that we could have a thing that follows rules, and adheres strictly to them, all day long.

I'm not sure why people keep comparing machine behaviour to humans'. It's like economic models that assume perfect rationality... yeah, that's not reality, mate.

I confidently picked 8+blue and am now trying to understand why I personally did that. I think the text of the puzzle may be somewhat ambiguous. The question states "test a card" followed by "which cards", so this is what my brain immediately starts to check: every card, one by one. Do I need to test "3"? No, it's not even. Do I need to test "8"? Yes. Do I need to test "blue"? Yes, because I need to test "a card" that fits the criteria. And lastly, the "red" card also immediately fails verification as "a card" fitting that criteria.

I think a corrected question should clarify, in some obvious way, that we are verifying not "a card" but "a rule" applicable to all cards. So "a" needs to be replaced with "all" or "any", and a mention of a rule or pattern needs to be added.

It also doesn't explain why people don't think it necessary to check the 3 to make sure it's not blue (which it would be if "if" meant "if and only if").

I think we're actually closer to agreement than it might seem.

You're right that the Wason task is partly about a mismatch between how "if" works in formal logic and how it works in everyday language. That's a fair point. But I think it actually supports what I'm saying rather than undermining it. If people default to interpreting "if x then y" as "if and only if" based on how language normally works in conversation, that is pattern-matching from familiar context. It's a totally understandable thing to do, and I'm not calling it a cognitive defect. I'm saying it's evidence that our default mode is contextual pattern-matching, not rule application. We agree on the mechanism, we're just drawing different conclusions from it.

Your own experience is interesting too. You got the right answer because you have some background in formal logic. That's exactly what I'd expect. Someone who's practiced in a domain recognizes the pattern quickly. But that's the claim: most reasoning happens within well-practiced domains. Your success on the task doesn't counter the pattern-matching thesis, it's a clean example of it working well.

On the broader point about LLMs having a "tenuous grasp on reality," I hear that, and I don't want to flatten the differences. There probably is something meaningfully different going on with how humans stay grounded. I just think the "humans reason, LLMs pattern-match" framing undersells how much human cognition is also pattern-matching, and that being honest about that is more productive than treating it as a reductionist insult.

Agree with much of your comment.

Though note that, as GP said, on the Wason selection task people famously do much better when it's framed in a social context. That at least partially undermines your theory that it's a lack of familiarity with the terminology of formal logic.

Maybe the social version just creates a context where "if x then y" obviously does not include "if not x then not y". Everyone knows people over the drinking age can drink both alcoholic and non-alcoholic drinks, so you obviously don't have to check the person drinking the soft drink to make sure they aren't an adult.


I, for the life of me, could not solve the <18 example from Wikipedia, but the number/color one is super easy.

As they say, "think about how smart the average person is, then realize half the population is below that". There are far more haikus than opuses walking this planet.

We keep benchmarking models against the best humans and the best human institutions - then when someone points out that swarms, branching, or scale could close the gap, we dismiss it as "cheating". But that framing smuggles in an assumption that intelligence only counts if it works the way ours does. Nobody calls a calculator a cheat for not understanding multiplication - it just multiplies better than you, and that's what matters.

LLMs are a different shape of intelligence. Superhuman on some axes, subpar on others. The interesting question isn't "can they replicate every aspect of human cognition" - it's whether the axes they're strong on are sufficient to produce better than human outcomes in domains that matter. Calculators settled that question for arithmetic. LLMs are settling it for an increasingly wide range of cognitive work. The fact that neither can flip a burger is irrelevant.

Humans don't have a monopoly on intelligence. We just had a monopoly on generality and that moat is shrinking fast.

The "God of the gaps" theory is a theological and philosophical viewpoint where gaps in scientific knowledge are cited as evidence for the existence and direct intervention of a divine creator. It asserts that phenomena currently unexplained by science—such as the origin of life or consciousness—are caused by God.

We are inverting the God of the gaps into an "LLM of the gaps", where gaps in LLM capabilities are considered inherently negative and limiting.

It is not actually about the gaps in capability; instead it arises from an understanding of how these things work and an honest acknowledgement of how far they could go.

The question is not whether these things are actually intelligent. The question is whether they will be useful without an endless supply of training data and continuous re-alignment using it.

And the question "are these things really intelligent?" is just a proxy for that.

And we are interested in that question because it is necessary to justify the massive investment these things are getting now. It is quite easy to look at these things and conclude that they will continue to progress without any limit.

But that would be like looking at data compression at the time of its conception and thinking that it is only a matter of time before we can compress 100GB into 1KB.

We live in a time of scams that are obvious if you take a second look. With something that requires much deeper scrutiny, it is possible to generate a much larger bubble.

> and that moat is shrinking fast

The point is that in reality it is not. It is just an appearance. If you consider how these things work, there is no justification for this conclusion.

I have said this elsewhere, but the problem of hallucination itself, along with the requirement of re-training, is the smoking gun that these things are not intelligent in ways that would justify these massive investments.

> If you added a simple additional to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.

Agreed. More broadly, classical logic isn't the only logic out there. Many logics differ on the meaning of the implication "if x then y". There are multiple ways for x to imply y, and those additional meanings do show up in natural language all the time; we actually do have logical systems to describe them, they are just lesser known.

Mapping natural language into logic often requires context that lies outside the words that were written or spoken. We need to represent in formulas what people actually meant, rather than just what they wrote. Indeed, the same sentence can sometimes be ambiguous, while a logical formula never is.

As an aside, I want to say that material implication (that is, the "if x then y" of classical logic) deeply sucks, or rather, an implication in natural language very rarely maps cleanly onto material implication. Having "if x then y" be vacuously true when x is false is something usually associated with people who smirk at clever wordplay, rather than something people actually mean when they say "if x then y".
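As a minimal illustration of that gap (my own sketch, not from the thread): material implication and the everyday biconditional reading disagree in exactly one row of the truth table, the vacuous case.

```python
def implies(x: bool, y: bool) -> bool:
    # Classical material implication: false only when x is true and y is false.
    return (not x) or y

def iff(x: bool, y: bool) -> bool:
    # The "if and only if" reading people often give "if x then y" in speech.
    return x == y

# The two readings disagree only when x is false and y is true:
# material implication is vacuously true there, the biconditional is not.
disagreements = [(x, y) for x in (True, False) for y in (True, False)
                 if implies(x, y) != iff(x, y)]
print(disagreements)  # [(False, True)]
```

That lone disagreement row is precisely the "vacuously true" behavior the comment above objects to.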

Your response contains a performative contradiction: you are asserting that humans are naturally logical while simultaneously committing several logical errors to defend that claim.

This comment would be a lot more useful with an enumeration of those logical errors.

The commenter's specific claim, that adding a note about the definition of "if" would solve the problem, is a moving-the-goalposts fallacy and a tautology. The comment also suffers from hasty generalization (in their experience the test isn't hard) and special pleading (a double standard for LLMs and humans).

When someone tells you "you can have this if you pay me", they don't mean "you can also have it if you don't pay". They are implicitly but clearly indicating you gotta pay.

It's as simple as that. In common use, "if x then y" frequently implies "if not x then not y". Pretending that it's some sort of a cognitive defect to interpret it this way is silly.

In the original studies, most people made an error that can't be explained by that misunderstanding: they failed to select the card showing 'not y'.

From my armchair this feels relevant:

> Decoding analyses of neural activity further reveal significant above chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., “not bad” represented as “good”)[...]

From: Negation mitigates rather than inverts the neural representations of adjectives

At: https://journals.plos.org/plosbiology/article?id=10.1371/jou...

> But then again, how often do humans actually reason outside their own “training distribution”? Most human insight happens within well-practiced domains.

Humans can produce new concepts and then symbolize them for communication purposes. The meaning of concepts is grounded in operational definitions - in a manner that anyone can understand because they are operational, and can be reproduced in theory by anyone.

For example, Euclid invented the concepts of a point, angle and line to operationally represent geometry in the real world. These concepts were never "there" to begin with. They were created from scratch to "build" a world-model that helps humans navigate the real world.

Euclid went outside his "training distribution" to invent point, angle, and line. Humans have this ability to construct new concepts by interaction with the real world - bringing the "unknown" into the "known" so-to-speak. Animals have this too via evolution, but it is unclear if animals can symbolize their concepts and skills to the extent that humans can.

> Humans can produce new concepts and then symbolize them for communication purposes.

Sure, but the question is how often this actually happens versus how often people are doing something closer to recombination and pattern-matching within familiar territory. The point was about the base rate of genuine novel reasoning in everyday human cognition, and I don't think this addresses that.

> Euclid invented the concepts of a point, angle and line to operationally represent geometry in the real world. These concepts were never "there" to begin with.

This isn't really true though. Egyptian and Babylonian surveyors were working with geometric concepts long before Euclid. What Euclid did was axiomatize and systematize knowledge that was already in wide practical use. That's a real achievement, but it's closer to "sophisticated refinement within a well-practiced domain" than to reasoning from scratch outside a training distribution. If anything the example supports the parent comment.

There's also something off about saying points and lines were "never there." Humans have spatial perception. Geometric intuitions come from embodied experience of edges, boundaries, trajectories. Formalizing those intuitions is real work, but it's not the same as generating something with no prior basis.

The deeper issue is you're pointing to one of the most extraordinary intellectual achievements in human history and treating it as representative of human cognition generally. The whole point, drawing on Kahneman, is that most of what we call reasoning is fast associative pattern-matching, and that the slow deliberate stuff is rarer and more error-prone than people assume. The fact that Euclid existed doesn't tell us much about what the other billions of humans are doing cognitively on a Tuesday afternoon.

> Formalizing those intuitions is real work, but it's not the same as generating something with no prior basis.

> The fact that Euclid existed doesn't tell us much about what the other billions of humans are doing cognitively on a Tuesday afternoon.

Birds can fly, so there is some flying intelligence built into their DNA. But are they aware of their skill in a way that would let them create a theory of flight, and then use that to build a plane? I am just pointing out that intuitions are not enough; the awareness of the intuitions, in a manner that can symbolize and operationalize them, is important.

> The whole point, drawing on Kahneman, is that most of what we call reasoning is fast associative pattern-matching, and that the slow deliberate stuff is rarer and more error-prone than people assume

David Bessis, in his wonderful book [1], argues that the cognitive actions done by you and me on a Tuesday afternoon are the same as what mathematicians do; we are just unaware of it. Also, since you brought up Kahneman, Bessis proposes a System 3 wherein inaccurate intuitions are corrected by precise communication.

[1] Mathematica: A Secret World of Intuition and Curiosity

The bird analogy is actually a really good one, but I think it supports a narrower claim than you're making. You're right that the capacity to symbolize and formalize intuitions is a distinct and important thing, separate from just having the intuitions. No argument there. But my point wasn't that symbolization doesn't matter. It was about how often humans actually exercise that capacity in a strong sense versus doing something more like recombination within familiar frameworks. The bird can't theorize flight, agreed. But most humans who can in principle theorize about their intuitions also don't, most of the time. The capacity exists. The base rate of its deployment is the question.

On Bessis, I actually think his argument is more compatible with what I was saying than it might seem. If the cognitive process underlying mathematical reasoning is the same one operating on a Tuesday afternoon, that's an argument against treating Euclid-level formalization as categorically different from everyday cognition. It suggests a continuum rather than a bright line between "pattern matching" and "genuine reasoning." Which is interesting and probably right. But it also means you can't point to Euclid as evidence that humans routinely do something qualitatively beyond what LLMs do. If Bessis is right, then the extraordinary cases and the mundane cases share the same underlying machinery, and the question becomes quantitative (how far along the continuum, how often, under what conditions) rather than categorical.

I'll check out the book though, it sounds like it's making a more careful version of the point than usually gets made in these threads.

> Kahneman’s whole framework points the same direction. Most of what people call “reasoning” is fast, associative, pattern-based. The slow, deliberate, step-by-step stuff is effortful and error-prone, and people avoid it when they can. And even when they do engage it, they’re often confabulating a logical-sounding justification for a conclusion they already reached by other means.

Some references on that:

https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

https://thedecisionlab.com/reference-guide/philosophy/system...

System 1 really looks like an LLM (indeed, completing a phrase is an example of what it can do, like "you either die a hero, or you live long enough to become the _"). It's largely unconscious and runs all the time, pattern matching on random stuff.

System 2 is something else and looks like a supervisor system, higher-level stuff that can be consciously directed through your own will.

But the two systems run at the same time and reinforce each other

In my naive understanding, neither requires any will or consciousness.

S1 is "bare" language production, picking words or concepts to say or think by fancy pattern prediction. There's no reasoning at this level, just blabbering. However, language by itself weeds out too-obvious nonsense purely statistically (some concepts are rarely in the same room), though we may call that "mindless"; that's why even early LLMs produced semi-meaningful texts.

S2 is a set of patterns inside the language ("logic") that biases S1 to produce reasoning-like phrases. It doesn't require any consciousness or will, just concepts pushing S1 towards a special structure; simply invoking one keeps them "in mind" and throws them into the mix.

I suspect S2 has a spectrum of rigorousness, because one can just throw in some rules (like "if X then Y; not Y, therefore not X") or may do fancier stuff (imposing a larger structure on it all, like formulating and testing a null hypothesis). Either way it all falls back onto S1 for the ultimate decision-making, a sense of what sounds right (allowing us our favorite logical flaws); thus the fancier the rules (patterns of "thought"), the sounder the reasoning is likely to be.

S2 doesn't just rely on but is a part of S1-as-language, though, because it's a phenomenon born out of (and inside) the language.

Whether it's willfully, "consciously" engaged, or whether it works just because S1 predicts the logical-thinking concept as appropriate for certain lines of thinking and starts to invoke it, probably doesn't even matter; it mainly depends on whatever definition of "will" we would like to pick (there are many).

LLMs and humans can hypothetically do both just fine, but when it comes to checking, humans currently excel because (I suspect) they have a “wider” language in S1, that doesn’t only include word-concepts but also sensory concepts (like visuospatial thinking). Thus, as I get it, the world models idea.

I remember reading about this in a book, 'The Enigma of Reason'. Basically it was saying that reasoning was exactly that: we decided, and then we came up with a reason for what we had decided, and usually not the other way around.

This is because the 'reasoning' part of our brain came from evolution: when we started to communicate with others, we needed to explain our behaviour.

Which is fascinating if you think of the implications. For the most part we think we are being logical, but in reality we are pattern matching/impulsive and using our reasoning/logic to come up with excuses for why we have chosen what we had already decided.

It explains a lot about the world and why it's so hard to reason with someone: we assume the decision came from reason in the first place, which, when you look at such people's choices, clearly it didn't.

> The story that humans have access to some pure deductive engine and LLMs are just faking it with statistics might be flattering to humans more than it’s accurate.

Your point rings true with most human reasoning most of the time. Still, at least some humans do have the capability to run that deductive engine, and it seems to be a key part (though not the only part) of scientific and mathematical reasoning. Even informal experimentation and iteration rest on deductive feedback loops.

The fact that humans can learn to do X (sometimes well, often badly, while many never do) strongly supports the conjecture that X is not how they naturally do things.

I can perform symbolic calculations too. But most people have limited versions of this skill, and many people who don’t learn to think symbolically have full lives.

I think it is fair to say humans don’t naturally think in formal or symbolic reasoning terms.

People pattern match.

Another clue is that humans have to practice things, become familiar with them, to reason even somewhat reliably about them. Even if they have already learned some formal reasoning.

---

Higher level reasoning is always implemented as specific forms of lower order reasoning.

There is confusion about substrate processing vs. what higher order processes can be created with that substrate.

We can “just” be doing pattern matching from an implementation view, and yet go far “beyond” pattern matching with specific compositions of pattern matching, from a capability view.

How else could neurons think? We are “only” neurons. Yet we far surpass the kinds of capabilities neurons have.

I don't disagree with any of that. My comment was only in relation to the question of human-specific capability that current LLMs may not be able to duplicate. I was not making the value judgments you seem to have read.

When people do math or rigorous deductive reasoning, are we sure they aren't just pattern matching with a set of carefully chosen interacting patterns that have been refined by ancient philosophers as being useful patterns that produce consistent results when applied in correctly patterned ways?

I've often wondered this. I suspect not, though I don't know. You're right that the answer matters to understanding LLM limitations relative to humans, though.

Brilliant insight. The success of LLM reasoning, ie “telling yourself a story”, has greatly increased my belief that humans are actually much less impressive than they seem. I do think it’s mostly pattern matching and a bunch of interacting streams analogous to LLM tokens. Obviously the implementations are different, because nature has to be robust and learn online, but I do not think we are as different from these machines as most people assume. There’s a reason Hofstadter et al. reacted as they did even to the earlier models.

This is also why I think the idea of humans as logical inference machines is mostly not true. We are seemingly capable of it, but there must be some cost that keeps it from being commonly used.

While humans did seemingly evolve socially very fast, given the tools we seem to have had for a few hundred thousand years, it could have been far faster if there were not some other limitations in play.