The choice between a human therapist and a computer chat is not one that most people in the world have. Most humans do not have access to a human therapist.
We should absolutely be talking about how to make LLM systems better at handling critical situations like this. But those who suggest that people should only talk to human therapists about their problems are taking a very “let them eat cake” position.
An LLM-based therapist should be tested like any other medical device. Your comment rests on an underlying assumption that they are beneficial. That assumption has not been proven; it is just as likely that they are hurting the people they purport to help.
Without a bevy of studies to prove it one way or the other, their use is unethical at best and actively harmful at worst.
One of the things I realised in recent years is that technical people lack a certain humanity. I wanted to call it empathy, but it's not that; it's a complete lack of self-awareness of how your actions and the tools you build affect others. Yes, something is cool and all, but that doesn't mean it should be used or put in people's hands. One of my colleagues, during an AI workshop, suggested we could just feed ChatGPT various employee numbers, make a list of the people in our company, rate them, and then decide who we should fire.
Now, to go back to this: yeah, LLMs are a cool technology, but the way something so unstable, more or less an uncontrollable black box, is thrown out into the wild for anyone to use just shows a complete lack of awareness from the industry.
This isn't about “let them eat cake”. What I understand from this position is something along the lines of: you can't afford cake, so here's a round of Russian roulette where you might get a piece of pie (hey, it's free, it's not cake, but it's good), a piece of garbage, or maybe a piece of poisoned pie. And for most people that's still something, right?
I guess where I'm coming from is that there seems to be a lot of effort and energy available for discouraging and even banning this sort of LLM use, without remotely comparable amounts of effort and energy put into figuring out how everyone can get access to mental health care from humans.
I'd bet a lot of money that very soon we'll have LLMs that guide people toward outcomes as good as (or better than) those of human therapists. I'd also bet a lot of money that we'll never manage to actually provide access to human therapists to everyone who could use it.