This aspect is fascinating:
> The breakdown came when another chatbot — Google Gemini — told him: “The scenario you describe is an example of the ability of language models to spin convincing but completely false narratives.”
Presumably, humans had already told him the same thing, but he only believed it once an AI said it. I wonder whether Gemini has any kind of special training to detect these situations.