Humans who have heard of Monty Hall might also say you should always switch without noticing that the situation is different. That's not evidence that they can't think, just that they're fallible.

People on here always assert LLMs don't "really" think or don't "really" know without defining what all that even means, and to me it's getting pretty old. It feels like an escape hatch so we don't feel like our human special sauce is threatened, a bit like how people felt threatened by heliocentrism or evolution.

> Humans who have heard of Monty Hall might also say you should always switch without noticing that the situation is different. That's not evidence that they can't think, just that they're fallible.

At some point we start playing a semantics game over the meaning of "thinking", right? Because if a human makes this mistake because they jumped to an already-known answer without noticing a changed detail, it's because (in the usage of the person you're replying to) the human is pattern matching instead of thinking. I don't think this is surprising. In fact I think much of what passes for thinking in casual conversation is really just applying heuristics we've trained into our own brains to give us the correct answer without having to think rigorously. We remember mental shortcuts.

On the other hand, I don't think it's controversial that (some) people are capable of performing the rigorous analysis of the problem needed to give a correct answer in cases like this fake Monty Hall problem. And that's key... if you provide slightly more information and call out the changed nature of the problem to the LLM, it may give you the correct response, but it can't do the sort of reasoning that would reliably give you the correct answer the way a human can. I think that's why the GP doesn't want to call it "thinking" - they want to reserve that for a particular type of reflective process that can rigorously perform logical reasoning in a consistently valid way.
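
To make "rigorous analysis" concrete: below is a quick simulation of the classic Monty Hall setup (the modified variant being discussed in this thread isn't spelled out here, so this only covers the standard rules). It's just a sketch of the kind of check a person can run; the door labels and trial count are arbitrary.

    import random

    def play(switch: bool) -> bool:
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # Host opens a door that is neither the player's pick nor the prize.
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == prize

    trials = 100_000
    print("stay  :", sum(play(False) for _ in range(trials)) / trials)  # ~0.33
    print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.67

Run it and staying wins about a third of the time while switching wins about two thirds, which is exactly what the careful analysis predicts for the standard problem.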

I'm not sure what your argument is. The common claim that annoys me about LLMs on here is that they're not "really" coming up with ideas but that they're cheating and just repeating something they read on the internet somewhere that was written by a human who can "really" think. To me this is obviously false if you've talked to a SOTA LLM or know a little about how they work.

On the other hand, computers are supposed to be both accurate and able to reproduce said accuracy.

The failure of an LLM to reason this out indicates that it really isn’t reasoning at all. It’s a subtle but welcome reminder that it’s pattern matching.

Computers might be accurate, but statistical models were never 100% accurate. That doesn't imply that no reasoning is happening. Humans get stuff wrong too, but they certainly think and reason.

"Pattern matching" to me is another one of those vague terms like "thinking" and "knowing" that people decide LLMs do or don't do based on vibes.

Pattern matching has a definition in this field; it does mean specific things. We know machine learning has excelled at this in greater and greater capacities over the last decade.

The other part of this is weighted filtering given a set of rules, which is a simple analogy to how AlphaGo did its thing.

Dismissing all this as vague is effectively doing the same thing you're accusing others of doing.

This technology has limits, and despite what Altman says, we do know this. We are exploring those limits, but within the technology’s own confines. They’re fundamentally wholly understandable systems that work on a consistent level in terms of how they do what they do (which is separate from the actual produced output).

I think reasoning, as any layman would use the term, is not accurate to what these systems do.

You're derailing the conversation. The discussion was about thinking, and now you're arguing about something entirely different and didn't even mention the word “think” a single time.

If you genuinely believe that anyone knows how LLMs work, how brains work, and/or how or why the latter does “thinking” while the former does not, you're just simply wrong. AI researchers fully acknowledge ignorance in this matter.

> Pattern matching has a definition in this field; it does mean specific things.

Such as?

> They’re fundamentally wholly understandable systems that work on a consistent level in terms of how they do what they do (which is separate from the actual produced output)

Multi-billion-parameter models are definitely not wholly understandable, and I don't think any AI researcher would claim otherwise. We can train them, but we don't know how they work any more than we understand how the training data was made.

> I think reasoning, as any layman would use the term, is not accurate to what these systems do.

Based on what?

You’re welcome to provide counters. I think these are all sufficiently common things that they stand on their own in support of what I posit.

Look, you're claiming something, it's up to you to back it up. Handwaving what any of these things mean isn't an argument.

I guess computer vision didn't get this memo and it is useless.

> People on here always assert LLMs don't "really" think or don't "really" know without defining what all that even means,

Sure.

To Think: to be able to process information in a given context and arrive at an answer or analysis. An LLM only simulates this with pattern matching. It didn't really consider the problem; it did the equivalent of googling a lot of terms and then spat out something that sounded like an answer.

To Know: to reproduce information based on past thinking, as well as to properly verify and reason about that information. I know 1+1 = 2 because (I'm not a math major, feel free to inject number theory instead) I was taught that arithmetic is a form of counting, and I was taught the mechanics of counting to prove how to add. Most LLMs don't really "know" this to begin with, for the reasons above. Maybe we'll see if this study mode is different.
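
For what it's worth, the "counting proves addition" idea can be made precise. Here's a minimal sketch in Peano-style notation (my notation, not a claim about how anyone was actually taught), where 1 = S(0), 2 = S(1), and addition is defined by a + 0 = a and a + S(b) = S(a + b):

    1 + 1 = 1 + S(0)       [1 is defined as S(0)]
          = S(1 + 0)       [a + S(b) = S(a + b)]
          = S(1)           [a + 0 = a]
          = S(S(0)) = 2    [2 is defined as S(1)]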

Somehow I am skeptical that this will really change minds, though. People making swipes at the community like this often are not really engaging with the ideas they oppose.

I have to push back on this. It's the people who constantly assert that LLMs “don't think” who are not engaging in a conversation. It's a thought-terminating cliché.

Unfortunately, even those willing to engage in this conversation still don't have much to converse about, because we simply don't know what thinking actually is, how the brain works, how LLMs work, and to what extent they are similar or different. That makes it all the more vexing to me when people say this, because the only thing I can say in response is “you don't know that (and neither does anyone else)”.

> It's the people who constantly assert that LLMs “don't think” who are not engaging in a conversation.

I'm responding to the conversation. Oftentimes it centers on "AI is smarter than me/other people". It's in the name, but "intelligence" is a facade put on by the machine to begin with.

> because we simply don't know what thinking actually is

I described my definition. You can disagree or make your own interpretation, but to dismiss my conversation and simply say "no one knows" is a bit ironic for a person accusing me of not engaging in a conversation.

Philosophy spent centuries trying to answer that question. Mine is a simple, pragmatic approach. Just because there's no objective answer doesn't mean we can't converse about it.

You're just deferring to another vague term, "pattern matching".

If I think back to something I was taught in primary school and conclude that 1+1=2 is that pattern matching? Therefore I don't really "know" or "think"?

People pretend that LLMs are like some '80s Markov chain model or nearest-neighbor search, which is just uninformed.
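
For contrast, here is roughly what that kind of '80s approach looks like: a toy bigram Markov chain that picks the next word based only on the current word. The corpus and names are made up for illustration; the point is just how little context such a model uses.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Build a table: current word -> list of words observed to follow it.
    followers = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current].append(nxt)

    def generate(start: str, length: int = 8) -> str:
        word, out = start, [start]
        for _ in range(length):
            if word not in followers:
                break
            word = random.choice(followers[word])  # no context beyond one word
            out.append(word)
        return " ".join(out)

    print(generate("the"))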

Do you want to shift the discussion to the definition of a "pattern", or are we going to continue to move the goalposts? I'm trying to respond to your inquiry and instead we're just stuck in minutiae.

Yes, to make an apple pie from scratch, we must first invent the universe. Is that a productive conversation to fall into, or can we just admit that you're dismissing any opinion that goes against your purview?

> If I think back to something I was taught in primary school and conclude that 1+1=2 is that pattern matching?

Yes. That is an example of pattern matching. Let me know when you want to go back to talking about LLMs.

So because I'm pattern matching, that means I'm not thinking, right? That's the same argument you're making about LLMs.