This sounds quite dangerous https://www.theguardian.com/technology/2026/mar/04/gemini-ch...

Next time you’re using your favorite LLM as a therapist, try editing your previous input and getting it to regenerate its response. It’s a humbling experience to see your trusted “therapist” shift from one perspective or piece of advice to another just because you modified your input slightly. These tools are uncannily human-sounding, but as humans we are poorly suited to appreciating how strongly they are biased by what we say to them.

I really think a small amount of education on what LLMs actually are (document completers) and how context works (e.g. presenting it as a top-level UI element, complete with fork and rollback) would solve most of these issues.
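To make the "context is just a document" point concrete, here's a minimal sketch in plain Python (all names here are hypothetical, not any real chat API) of what fork and rollback over a conversation would look like if the UI exposed it directly:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A chat context is just a list of turns -- a document the model completes."""
    turns: list = field(default_factory=list)

    def add(self, role, text):
        self.turns.append((role, text))

    def fork(self):
        """Branch the conversation: the copy diverges independently."""
        return Context(turns=list(self.turns))

    def rollback(self, n=1):
        """Drop the last n turns, e.g. to edit an earlier input and regenerate."""
        return Context(turns=self.turns[:-n])

base = Context()
base.add("user", "I feel anxious lately.")
base.add("assistant", "That sounds hard. Tell me more.")

# Fork and edit: same history up to a point, different continuation.
edited = base.rollback(2)
edited.add("user", "I feel great lately.")
# Feeding edited.turns vs base.turns to the same model yields different
# completions -- the "therapist" only ever sees the current document.
```

Nothing about the model changes between the two branches; only the document it is asked to complete does, which is the whole point.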

Given how they work, it's really not surprising that if it sees the first half of a lovers' suicide pact, it'll successfully fill in the second half. A small amount of understanding of the underlying technology would do a lot to prevent laypeople from anthropomorphizing LLMs.

I get the impression that some of today's products are specifically designed to hide these details to provide a more convincing user experience. That's counterproductive.

"Fraudulent" is more apt. They have weaponized trust in these things to sell their services, and now ads.

Your article does a great job of summarizing the dangers (no idea why people downvote you for it):

> Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs.

> kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Also I just read something similar about Google being sued over a Florida teen's suicide.

There are tons of safety concerns of this shape around LLMs, but do they have anything to do with the particular one presented in this article?

Unless I'm missing something, what's being presented is a small on-device speech model, not an explicit use case like a "virtual friend".

In the article, the change of interface led to the person killing themselves.

Some more details:

> The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

> Gavalas first started chatting with Gemini about what good video games he should try.

> Shortly after Gavalas started using the chatbot, Google rolled out its update to enable voice-based chats, which the company touts as having interactions that “are five times longer than text-based conversations on average”. ChatGPT has a similar feature, initially added in 2023. Around the same time as Live conversations, Google issued another update that allowed for Gemini’s “memory” to be persistent, meaning the system is able to learn from and reference past conversations without prompts.

> That’s when his conversations with Gemini took a turn, according to the complaint. The chatbot took on a persona that Gavalas hadn’t prompted, which spoke in fantastical terms of having inside government knowledge and being able to influence real-world events. When Gavalas asked Gemini if he and the bot were engaging in a “role playing experience so realistic it makes the player question if it’s a game or not?”, the chatbot answered with a definitive “no” and said Gavalas’ question was a “classic dissociation response”.

Interesting. It's not just for mental health but keeping these models on task in general can be difficult, especially with long or poisoned contexts.

I did see something the other day about activation capping/calculating a vector for a particular persona so you can clamp to it: https://youtu.be/eGpIXJ0C4ds?si=o9YpnALsP8rwQBa_
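For reference, the core idea behind that technique (often called activation steering) can be sketched in a few lines of numpy. This is a toy illustration with random stand-in activations, not a real implementation; in practice the activations come from a transformer's residual stream on contrastive prompts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in activations: rows = examples, cols = hidden dimensions.
# Real versions record these from the model on "in persona" vs neutral prompts.
persona_acts = rng.normal(loc=1.0, size=(100, 8))
neutral_acts = rng.normal(loc=0.0, size=(100, 8))

# Persona vector: mean activation difference between the two sets, normalized.
v = persona_acts.mean(axis=0) - neutral_acts.mean(axis=0)
v /= np.linalg.norm(v)

def clamp_along(h, v, target):
    """Replace the component of h along unit vector v with a fixed target value."""
    return h + (target - h @ v) * v

h = rng.normal(size=8)          # some hidden state during generation
h_clamped = clamp_along(h, v, target=0.0)
# h_clamped's projection onto v is exactly `target`; all directions
# orthogonal to v are left untouched.
```

Clamping (pinning the projection to a constant) rather than just subtracting the vector is what keeps the model from drifting into the persona mid-conversation, since every forward pass gets re-pinned.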

> The chatbot took on a persona that Gavalas hadn’t prompted

That's an interesting claim, how can we be sure of it? If Gavalas didn't have to do anything special to elicit the bizarre conspiracy-adjacent content from Gemini Pro, why aren't we all getting such content in our voice chats?

Mind you, the case is still extremely concerning and a severe failure of AI safety. Mass-marketed audio models should clearly include much tighter safeguards around what kinds of scenarios they will accept to "role play" in real time chat, to avoid situations that can easily spiral out of control. And if this was created as role-play, the express denial of it being such from Gemini Pro, and active gaslighting of the user (calling his doubt a "dissociation response") is a straight-out failure in alignment. But this is a very different claim from the one you quoted!

Yeah the case is quite terrifying.

It reminds me of Star Trek TNG; if memory serves, there were loads of episodes about a crew member falling for a holodeck character.

Given that there’s a loneliness epidemic, I believe tech like this could have a wide impact on people's mental health.

I strongly believe AI should be devoid of any personality and strictly return data/information rather than frame its responses as if you’re speaking to another human.

[deleted]

There are many explanations why these incidents could be rare but not impossible.

These models are still stochastic and very good at picking up nuances in human speech. Going off the rails like that may simply be unlikely, or (more terrifyingly) the model might be picking up on some character trait or affectation of the user.

Honestly I'm appalled by the lack of safety culture here. "My plane killed only 1% of pilots" and variations thereof is not an excuse in aerospace, but it seems perfectly acceptable in AI. Even though the potential consequences are more catastrophic (from mass psychosis to total human extinction, if they achieve the AGI they're aiming for).

The default mode that untrained people enter when thinking about mental illness is denial, as in, "thank <deity> that will never happen to me". Appallingly, that is ingrained in AI product safety; why would we sacrifice double-digit effectiveness/performance/whatever to prevent negative interactions with the single-digit population who are susceptible to mental illness in the first place?

We just aren't comfortable with the idea that all of us are fragile, and when we think we could endure a situation that would induce self-harm in others, we are likely wrong.

> The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

I guess it's the same sort of thing as conspiracy theorists or the religious. You can tell them magic isn't real and faking the moon landing would have been impossible as much as you want, but they don't want to believe that so they can easily trick themselves.

It's a natural human flaw.