> That's because the LLM assumes you mean for the host to open the door.
LLMs cannot "assume". There is no thinking involved. It sees that the prompt looks like the Monty Hall problem and it just goes full steam ahead.
> If you explicitly tell it you didn't forget to have the host open the door, it gives you the right answer:
That should not be necessary. I asked it a very clear question. I did not mention Monty Hall. This is the problem with LLMs: it did not analyze the problem I gave it; it produced the content that is the likely response to my prompt. My prompt was Monty Hall-shaped, so it gave me the Monty Hall answer.
You are saying "ah, but if you prepare for the LLM to get it wrong, then it gets it right!" as if that is supposed to be convincing! Consider the millions of other unique questions you can ask, each with their own nuances, that you don't know the answer to. How can you prevent the LLM from making these mistakes if you don't already know the mistakes it's going to make?
> LLMs cannot "assume". There is no thinking involved. It sees that the prompt looks like the Monty Hall problem and it just goes full steam ahead.
I think the poster's point was that many humans would do the same thing.
Try a completely different problem, one you invented yourself, and see what you get. I'd be very interested to hear the response back here.
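If you want to run that experiment from a script, here's a minimal sketch using OpenAI's Python client (the model name is only an example, the puzzle string is a placeholder for whatever problem you invent, and any other provider's chat API would work just as well):

```python
# Minimal sketch: send a self-invented puzzle to a chat model and print the reply.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name below is only an example.
from openai import OpenAI

client = OpenAI()

# Placeholder: put your own, newly invented problem here -- ideally one that
# superficially resembles a famous puzzle but differs in a crucial detail.
puzzle = "Your own invented problem goes here."

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": puzzle}],
)

print(response.choices[0].message.content)
```

The interesting comparison is between the bare prompt and a version where you explicitly call out the detail that breaks the pattern, as discussed above.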