There are many explanations for why these incidents could be rare but not impossible.
These models are still stochastic and very good at picking up nuances in human speech. It may simply be unlikely for a model to go off the rails like that, or (more terrifyingly) it might be picking up on some character trait or affectation of the user.
Honestly, I'm appalled by the lack of safety culture here. "My plane killed only 1% of pilots" and variations thereof are not an excuse in aerospace, but the attitude seems perfectly acceptable in AI, even though the potential consequences are more catastrophic (from mass psychosis to total human extinction, if they ever achieve their AGI).
The default mode untrained people enter when thinking about mental illness is denial, as in, "thank <deity> that will never happen to me". Appallingly, that same denial is ingrained in AI product safety: why would we sacrifice double-digit effectiveness/performance/whatever to prevent negative interactions with the single-digit percentage of users who are susceptible to mental illness in the first place?
We just aren't comfortable with the idea that all of us are fragile, and when we believe we could endure a situation that would drive others to self-harm, we are likely wrong.