That's because the LLM assumes you mean for the host to open the door.
As an autistic person I can assure you that people filling in information you neither gave nor implied is the norm for human conversations. In that sense, the LLMs are being human-like.
If you explicitly tell it you didn't forget to have the host open the door, it gives you the right answer:
> There is a game show with three closed doors. One has a car and two have goats. I pick a door. Then, without opening my door or any other door, the host asks me if I want to switch my choice to one of the other two doors. What is my best strategy and what are the probabilities?
Gemini: This is a classic probability puzzle known as the Monty Hall problem, though your description presents a slight variation. The optimal strategy depends entirely on whether the host opens a door or not.

Your Scenario: The Host Doesn't Open a Door

In the situation you've described, where the host asks if you want to switch your choice to one of the other two doors without opening any doors, there is no advantage to switching. Here's a breakdown of the probabilities in your specific scenario:

- Your initial choice: When you first pick a door, you have a 1 in 3 chance of selecting the door with the car. This means you have a 2 in 3 chance of having picked a door with a goat.
- The other two doors: The combined probability that the car is behind one of the other two doors is 2/3.
- Switching your choice: If you decide to switch, you are essentially choosing one of the other two doors. Since there is no new information (a door with a goat has not been revealed), the 2/3 probability is evenly split between those two doors. Therefore, your chance of winning by switching to a specific one of the other doors is 1/3.

In this case, your odds of winning are 1/3 whether you stay with your original choice or switch to one of the other doors.
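For what it's worth, a quick Monte Carlo sketch (my own illustration, not part of Gemini's reply; it assumes the car is placed uniformly at random and that in the classic version the host always opens a goat door other than your pick) reproduces both sets of numbers:

```python
import random

def trial(host_opens_door: bool) -> bool:
    """Play one game; return True if switching wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    if host_opens_door:
        # Classic Monty Hall: the host opens a goat door that isn't your pick.
        opened = random.choice([d for d in doors if d != pick and d != car])
        switch_to = next(d for d in doors if d != pick and d != opened)
    else:
        # The variant in the prompt: no door is opened, so "switching"
        # just means picking one of the other two doors at random.
        switch_to = random.choice([d for d in doors if d != pick])
    return switch_to == car

n = 100_000
for opens in (True, False):
    wins = sum(trial(opens) for _ in range(n))
    label = "host opens a goat door" if opens else "no door opened"
    print(f"{label}: switching wins {wins / n:.3f}")
# Prints roughly 0.667 for the classic version and 0.333 for the no-reveal variant.
```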
> That's because the LLM assumes you mean for the host to open the door.
LLMs cannot "assume". There is no thinking involved. The model sees that the prompt looks like the Monty Hall problem and just goes full steam ahead.
> If you explicitly tell it you didn't forget to have the host open the door, it gives you the right answer:
That should not be necessary. I asked it a very clear question. I did not mention Monty Hall. This is the problem with LLMs: it did not analyze the problem I gave it; it produced content that is the likely response to my prompt. My prompt was Monty Hall-shaped, so it gave me the Monty Hall answer.
You are saying "ah but then if you prepare for the LLM to get it wrong, then it gets it right!" as if that is supposed to be convincing! Consider the millions of other unique questions you can ask, each with their own nuances, that you don't know the answer to. How can you prevent the LLM from making these mistakes if you don't already know the mistakes it's going to make?
> LLMs cannot "assume". There is no thinking involved. The model sees that the prompt looks like the Monty Hall problem and just goes full steam ahead.
I think the poster's point was that many humans would do the same thing.
Try a completely different problem, one you invented yourself, and see what you get. I'd be very interested to hear the response back here.