I am sorry for Sophie's family and friends, and I am really just out of words.
To me, it felt as another commenter on HN also said (a point I'd like to extend): even if ChatGPT itself did allow this kind of reporting, I doubt how effective it could be. Sure, people using ChatGPT might be better off, and even if that saves just one life it should be done, but it would still not fully address the main issue, since services like Brave and DuckDuckGo offer private AI (maybe Venice too) that requires no account at all, and are we forgetting about running local models?
I am sure most people won't use local models for therapy, since the barrier to entry is too high for 99% of people IMO, but I can still imagine people turning to Venice or Brave, or some other therapy bot without this reporting functionality, precisely because they fear being reported.
Honestly, I am just laying out thoughts. I still believe that since most people think AI = ChatGPT, a step toward actual reporting might be a net positive for society if it saves even one life, but that might just be moving the goalposts, since other services can pop up all the same.
Note that the mother’s request is not for chatbot reporting, but for the chatbot to redirect discussion of suicidal feelings to any human being at all.
> As a former mother, I know there are Sophies all around us. Everywhere, people are struggling, and many want no one to know. I fear that in unleashing A.I. companions, we may be making it easier for our loved ones to avoid talking to humans about the hardest things, including suicide.
Her daughter opened up voluntarily about it two months before the end, but that could have happened many months sooner if the chatbot had pressed her to discuss it with a human being at every turn, rather than promoting future chatbot usage by supporting her desire to keep her suicidal thoughts secret. Perhaps it would not have saved her daughter, but it would have improved her chances of survival in ways that today’s chatbots do not.
> Note that the mother’s request is not for chatbot reporting
Not from the mother, but it is something the article floats as an idea:
"Should Harry have been programmed to report the danger “he” was learning about to someone who could have intervened? [...] If Harry had been a flesh-and-blood therapist rather than a chatbot, he might have encouraged inpatient treatment or had Sophie involuntarily committed until she was in a safe place. "
> but for the chatbot to redirect discussion of suicidal feelings to any human being at all.
It does generally seem to have done that:
"Harry offered an extensive road map where the first bullet point was “Seek Professional Support.” "
"Harry: Sophie, I urge you to reach out to someone — right now, if you can. You don’t have to face this pain alone. You are deeply valued, and your life holds so much worth, even if it feels hidden right now."
It's unclear to me that there was any better response than what it gave.
“Seek Professional Support” is not interchangeable with the better response that was not given: “Seek Human Support”. The former is restrictive, and merely portrays the chatbot as untrained in psychiatric care. The latter includes friends, family, and strangers, but portrays the chatbot as incapable of replacing human social time. For a chatbot to only recommend professional human interactions as an alternative to more time with the chatbot is unconscionable and prioritizes chatbot engagement over human lives. It should have been recommending human interactions at the top of, if not altogether in lieu of, every single reply it gave on this topic.
> For a chatbot to only recommend professional human interactions as an alternative to more time with the chatbot is unconscionable [...]
It didn't only recommend professional support: "I urge you to reach out to someone — right now"
> [...] if not altogether in lieu of, every single reply it gave on this topic.
Refusing to help at all, other than saying "speak to a human", feels to me like a move that would dodge bad press at the cost of lives. Urging human support while continuing to help seems the most favorable option, which appears to be what it did in the limited snippets we can see.