So many comments going "Well MY llm of choice gives the right answer". Sure, I believe you -- LLM output has LONG been known to vary from person to person, from machine to machine, depending on how you have it set up, and sometimes based on nothing at all.
That's part of the problem, though, isn't it?
If it consistently gave the right answer, well, that would be great! And if it consistently gave the wrong answer, that wouldn't be GREAT, but at least the engineers would know how to fix it. But sometimes it says one thing, sometimes it says another. We've known this for a long time. It keeps happening! But as long as your own personal chatbot gives the correct answer to this particular question, you can cover your eyes and pretend the planet-burning stochastic parrot is perfectly fine to use.
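For the "sometimes based on nothing at all" part, the mechanism is mundane: most chat frontends sample each next token from a probability distribution instead of always taking the most likely one, so the exact same prompt can come back with different answers on different runs. Here's a toy sketch of temperature sampling in Python (the vocabulary, the scores, and the temperature are all invented for illustration; real models do this over tens of thousands of tokens at every step):

    import math
    import random

    def sample_next_token(logits: dict[str, float], temperature: float) -> str:
        # Temperature-scaled softmax: higher temperature flattens the
        # distribution, so less likely tokens get picked more often.
        weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
        total = sum(weights.values())
        r = random.uniform(0.0, total)
        cumulative = 0.0
        for tok, w in weights.items():
            cumulative += w
            if r <= cumulative:
                return tok
        return tok  # guard against float rounding at the boundary

    # Made-up scores for a question the model is genuinely unsure about.
    toy_logits = {"yes": 2.0, "no": 1.8, "it depends": 0.5}
    for run in range(5):
        print(run, sample_next_token(toy_logits, temperature=1.0))

Run that a few times and "yes" and "no" both show up; push the temperature toward zero and the top-scoring token wins every run. That knob, plus model updates and backend differences, is most of why your chatbot and mine disagree, and why "works for me" tells us nothing.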
The comparison in one thread to the "How would you feel if you had not eaten breakfast yesterday?" question was particularly interesting, but I can't get past the fact that the Know Your Meme page that was linked (which included a VERY classy George Floyd meme, what the actual fuck) discussed those answers as if they reflected fundamental differences in human intelligence rather than the predictable result of a declining education system. That's something that's only going to get worse if we keep outsourcing our brains to machines.