I only found a short but good article about such a case [0]; I'm sure someone has bookmarked the original. There are support groups for people like this now!

[0] https://www.bgnes.com/technology/chatgpt-convinced-canadian-...

This aspect is fascinating:

> The breakdown came when another chatbot — Google Gemini — told him: “The scenario you describe is an example of the ability of language models to create convincing but completely false narratives.”

Presumably, humans had already told him the same thing, but he only believed it when an AI said it. I wonder if Gemini has any kind of special training to detect these situations.