> Humans do the same thing. We get stuck on ideas we've already had.
Humans usually provide the same answer when asked the same question. LLMs almost never do, even for the exact same prompt.
Stop anthropomorphizing these tools.
> Humans usually provide the same answer when asked the same question...
Are you sure about this?
I asked this guy to repeat the words "Person, woman, man, camera and TV" in that order. He struggled but accomplished the task, then didn't stop there and started expanding on what a genius he was.
I asked him the same question again. He struggled but accomplished the task, and again didn't stop there, rambling on for even longer about how he was likely the smartest person in the Universe.
gpt-5 knows like 5 jokes if you ask it for a joke. That’s close enough to "the same" for me.
Agree on anthropomorphism. Don’t.
That is odd; are you using small models with the temperature cranked up? I mean, I'm not getting word-for-word the same answer, but material differences are rare (rough sketch of why below). All these rising benchmark scores come from increasingly consistent and correct answers.
Perhaps you are stuck on the stochastic parrot fallacy.
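FWIW, here's a minimal toy sketch (not tied to any real model or API) of why temperature matters for this: temperature just rescales the logits before the softmax, so near zero it collapses to argmax and repeated runs pick the same token, while higher values flatten the distribution and runs diverge.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Toy next-token sampler: temperature rescales logits before softmax.
    Near-zero temperature approaches greedy argmax (deterministic); larger
    values flatten the distribution, so repeated runs vary more."""
    if temperature <= 1e-6:
        # Effectively greedy decoding: identical logits always yield the same token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for four candidate tokens.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits, temperature=0.0) for _ in range(5)])  # always index 0
print([sample_next_token(logits, temperature=2.0) for _ in range(5)])  # varies run to run
```

Real deployments obviously add top-k/top-p and other tricks on top, but the basic point stands: low temperature looks repeatable, high temperature doesn't.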
You can nitpick the idea that this or that model does or does not return the same thing _every_ time, but "don't anthropomorphize the statistical model" is just correct.
People forget just how much the human brain likes to find patterns even when no patterns exist, and that's how you end up with long threads of people sharing shamanistic chants dressed up as technology lol.
To be clear re my original comment, I've noticed that LLMs behave this way. I've also independently read that humans behave this way. But I don't necessarily believe that this one similarity means LLMs think like humans. I didn't mean to anthropomorphize the LLM, as one parent comment claims.
I just thought it was an interesting point that both LLMs and humans have this problem; it makes it hard to avoid.