You can nitpick the idea that this or that model does or does not return the same thing _every_ time, but "don't anthropomorphize the statistical model" is just correct.
People forget just how much the human brain likes to find patterns even when no patterns exist, and that's how you end up with long threads of people sharing shamanistic chants dressed up as technology lol.
To be clear re my original comment, I've noticed that LLMs behave this way. I've also independently read that humans behave this way. But I don't necessarily believe that this one similarity means LLMs think like humans. I didn't mean to anthropomorphize the LLM, as one parent comment claims.
I just thought it was an interesting point that both LLMs and humans share this problem, which makes it hard to avoid.