>People on here always assert LLMs don't "really" think or don't "really" know without defining what all that even means,
Sure.
To Think: to process information in a given context and arrive at an answer or analysis. An LLM only simulates this with pattern matching. It didn't really consider the problem; it did the equivalent of googling a lot of terms and then spat out something that sounded like an answer.
To Know: to reproduce information based on past thinking, and to properly verify and reason about that information. I know 1+1 = 2 because (I'm not a math major, feel free to inject number theory instead) I was taught that arithmetic is a form of counting, and I was taught the mechanics of counting to prove how to add. Most LLMs don't really "know" this to begin with, for the reasons above. Maybe we'll see if this study mode is different.
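For concreteness, that "counting" story can be spelled out formally. Here's a minimal sketch in Lean, assuming the standard Peano-style definition of addition (how you'd formalize it is my assumption, not something I was taught in primary school), where 1+1=2 follows just by unfolding the definitions:

    -- 1 is succ 0 and 2 is succ (succ 0); addition is defined by counting up.
    -- Both sides reduce to succ (succ 0), so the proof is definitional.
    example : 1 + 1 = 2 := rfl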
Somehow I am skeptical that this will really change minds, though. People taking swipes at the community like this are often not really engaging with the ideas they oppose.
I have to push back on this. It's the people who constantly assert that LLMs “don't think” who are not engaging in a conversation. It's a thought-terminating cliché.
Unfortunately, even those willing to engage in this conversation still don't have much to converse about, because we simply don't know what thinking actually is, how the brain works, how LLMs work, and to what extent they are similar or different. That makes it all the more vexing to me when people say this, because the only thing I can say in response is “you don't know that (and neither does anyone else)”.
>It's the people who constantly assert that LLMs “don't think” who are not engaging in a conversation.
I'm responding to the conversation. Oftentimes it's framed as "AI is smarter than me/other people." "Intelligence" is in the name, but it's a facade put on by the machine to begin with.
>because we simply don't know what thinking actually is
I described my definition. You can disagree or offer your own interpretation, but to dismiss my point and simply say "no one knows" is a bit ironic coming from a person accusing me of not engaging in a conversation.
Philosophy spent centuries trying to answer that question. Mine is a simple, pragmatic approach. Just because there's no objective answer doesn't mean we can't converse about it.
You're just deferring to another vague term, "pattern matching."
If I think back to something I was taught in primary school and conclude that 1+1=2, is that pattern matching? Does that mean I don't really "know" or "think"?
People pretend LLMs are like some '80s Markov chain model or nearest-neighbor search, which is just uninformed.
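For contrast, here's roughly what an '80s-style Markov chain text model amounts to; a minimal sketch (the toy corpus and the order-2 window are made up for illustration):

    from collections import defaultdict
    import random

    def train(tokens, order=2):
        # Record which token follows each window of `order` tokens.
        table = defaultdict(list)
        for i in range(len(tokens) - order):
            table[tuple(tokens[i:i + order])].append(tokens[i + order])
        return table

    def generate(table, seed, length=20):
        # The next token is sampled purely from the lookup table; there is no
        # learned representation, just counts over literal n-grams from the corpus.
        out = list(seed)
        for _ in range(length):
            choices = table.get(tuple(out[-len(seed):]))
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    corpus = "one plus one is two and two plus two is four".split()  # toy corpus
    print(generate(train(corpus), ("one", "plus")))

The whole "model" is a frequency table over n-grams it has already seen verbatim, which is why equating LLMs with that is the part I'm calling uninformed.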
Do you want to shift the discussion to the definition of a "pattern," or are we going to keep moving the goalposts? I'm trying to respond to your inquiry, and instead we're stuck in minutiae.
Yes, to make an apple pie from scratch, you must first invent the universe. Is that a productive conversation to fall into, or can we just admit that you're dismissing any opinion that goes against your view?
>If I think back to something I was taught in primary school and conclude that 1+1=2, is that pattern matching?
Yes. That is an example of pattern matching. Let me know when you want to go back to talking about LLMs.
So because I'm pattern matching, that means I'm not thinking, right? That's the same argument you're making about LLMs.