> Humans who have heard of Monty Hall might also say you should always switch without noticing that the situation is different. That's not evidence that they can't think, just that they're fallible.
At some point we start playing a semantics game over the meaning of "thinking", right? Because if a human makes this mistake by jumping to an already-known answer without noticing a changed detail, then (in the usage of the person you're replying to) the human is pattern matching instead of thinking. I don't think this is surprising. In fact, I think much of what passes for thinking in casual conversation is really just applying heuristics we've trained into our own brains to give us the correct answer without having to think rigorously. We remember mental shortcuts.
On the other hand, I don't think it's controversial that (some) people are capable of performing the rigorous analysis needed to give a correct answer in cases like this fake Monty Hall problem. And that's key... if you provide slightly more information and call out the changed nature of the problem, the LLM may give you the correct response, but it can't do the sort of reasoning that would reliably get there the way a human can. I think that's why the GP doesn't want to call it "thinking" - they want to reserve that word for a reflective process that can perform logical reasoning in a consistently valid way.
I'm not sure what your argument is. The claim about LLMs that annoys me on here is that they're not "really" coming up with ideas, just cheating by repeating something they read on the internet that was written by a human who can "really" think. To me this is obviously false if you've talked to a SOTA LLM or know a little about how they work.