> The first response is always the best and I try to one shot it every time. If I don't get what I want, I adjust the prompt and try again.
I've really noticed this too and ended up taking your same strategy, especially with programming questions.
For example if I ask for some code and the LLM initially makes an incorrect assumption, I notice the result tends to be better if I go back and provide that info in my initial question, vs. clarifying in a follow-up and asking for the change. The latter tends to still contain some code/ideas from the first response that aren't necessarily needed.
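If you drive this through an API instead of the chat UI, the difference is literally whether you append to the message history or rebuild it. A rough sketch of both approaches, assuming the OpenAI Python client, a placeholder model name, and made-up prompts (any chat-style API works the same way):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

def ask(messages):
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Approach 1: clarify in a follow-up -- the first (wrong) answer stays in
# context and tends to leak into the revision.
history = [{"role": "user", "content": "Write a parser for this log format."}]
first = ask(history)
history += [
    {"role": "assistant", "content": first},
    {"role": "user", "content": "Actually the timestamps are UTC, not local time. Please fix."},
]
follow_up_answer = ask(history)

# Approach 2: fold the clarification into a fresh initial prompt -- the model
# never sees its earlier assumption, so the answer starts from a clean slate.
revised = [{"role": "user", "content":
            "Write a parser for this log format. Timestamps are UTC, not local time."}]
fresh_answer = ask(revised)
```

In the first case the incorrect assumption (and the code built on it) is still in context; in the second it never existed.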
Humans do the same thing. We get stuck on ideas we've already had.[1]
---
[1] e.g. Rational Choice in an Uncertain World (1988) explains: "Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: 'Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.'"
I can say: "I'm trying to solve problem x. I've tried solutions a, b, and c. Here are the outputs of those (with run commands, code, and markdown code blocks). Help me find something that works" (not these exact words; I'm way more detailed). It'll frequently suggest one of the solutions I've already attempted if they're very common. If it doesn't have a solution d, it goes a > b > c > a > ... and gets stuck in the loop. If a human did that you'd be rightfully upset: they literally did the thing you told them not to do, and when you remind them they say "oops, sorry" and do it again. I'd rather argue with a child.
A wise mentor once said “fall in love with the problem, not the solution”
When you get the answer you want, follow up with "How could I have asked my question in a way to get to this answer faster?" and the LLM will provide some guidance on how to improve your question prompt. Over time, you'll get better at asking questions and getting answers in fewer shots.
> Humans do the same thing. We get stuck on ideas we've already had.
Humans usually provide the same answer when asked the same question. LLMs almost never do, even for the exact same prompt.
Stop anthropomorphizing these tools.
> Humans usually provide the same answer when asked the same question...
Are you sure about this?
I asked this guy to repeat the words "Person, woman, man, camera and TV" in that order. He struggled, but he accomplished the task. He did not stop there, though, and started expanding on how much of a genius he was.
I asked him the same question again. He struggled, but he accomplished the task. Again he did not stop there, and rambled on for even longer about how he was likely the smartest person in the Universe.
gpt-5 knows like 5 jokes if you ask it for a joke. That's close enough to "the same" for me.
Agree on anthropomorphism. Don’t.
That is odd. Are you using small models with the temperature cranked up? I mean, I'm not getting word-for-word the same answer, but material differences are rare. All these rising benchmark scores come from increasingly consistent and correct answers.
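To put a number on the "same answer" question: how much repeats diverge is mostly a sampling setting. A toy sketch, again assuming the OpenAI Python client and a placeholder model name; even at temperature 0 outputs aren't guaranteed to be bit-identical, but they're usually close, while a high temperature makes repeats wander:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name
PROMPT = [{"role": "user", "content": "Tell me a joke."}]

def sample(temperature, n=3):
    # Ask the exact same question n times at a given temperature.
    return [
        client.chat.completions.create(
            model=MODEL, messages=PROMPT, temperature=temperature
        ).choices[0].message.content
        for _ in range(n)
    ]

low = sample(0.0)   # repeats are usually near-identical (not guaranteed)
high = sample(1.5)  # repeats usually diverge noticeably
print(len(set(low)), "distinct answers at temperature 0.0")
print(len(set(high)), "distinct answers at temperature 1.5")
```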
Perhaps you are stuck on the stochastic parrot fallacy.
You can nitpick the idea that this or that model does or does not return the same thing _every_ time, but "don't anthropomorphize the statistical model" is just correct.
People forget just how much the human brain likes to find patterns even when no patterns exist, and that's how you end up with long threads of people sharing shamanistic chants dressed up as technology lol.
To be clear re: my original comment, I've noticed that LLMs behave this way. I've also independently read that humans behave this way. But I don't necessarily believe that this one similarity means LLMs think like humans. I didn't mean to anthropomorphize the LLM, as one parent comment claims.
I just thought it was an interesting point that both LLMs and humans have this problem - makes it hard to avoid.