> LLMs have a lot of advantages over humans for making conversation.

A lot of those advantages seem to be exactly what enables an LLM to keep pushing people into delusion or to trigger latent mental-health issues: https://www.psychologytoday.com/ie/blog/urban-survival/20250...

Yes, but when I ask it a speculative question, it doesn't just respond the way Siri does with "Sorry, I can't answer that!"

Of course, every technology comes with risks.