> Note that the mother’s request is not for chatbot reporting
Not from the mother, but it is something the article floats as an idea:
"Should Harry have been programmed to report the danger “he” was learning about to someone who could have intervened? [...] If Harry had been a flesh-and-blood therapist rather than a chatbot, he might have encouraged inpatient treatment or had Sophie involuntarily committed until she was in a safe place."
> but instead for chatbot redirecting discussion of suicidal feelings to any human being at all.
It does generally seem to have done that:
"Harry offered an extensive road map where the first bullet point was “Seek Professional Support.”"
"Harry: Sophie, I urge you to reach out to someone — right now, if you can. You don’t have to face this pain alone. You are deeply valued, and your life holds so much worth, even if it feels hidden right now."
Unclear to me that there was any better response than what it gave.
“Seek Professional Support” is not interchangeable with the better response not given: “Seek Human Support”. The former is restrictive, but merely portrays the chatbot as untrained in psychiatric care. The latter includes friends, family, and strangers — but portrays the chatbot as incapable of replacing human social time. For a chatbot to only recommend professional human interactions as an alternative to more time with the chatbot is unconscionable and prioritizes chatbot engagement over human lives. It should have been recommending human interactions at the top of, if not altogether in lieu of, every single reply it gave on this topic.
> For a chatbot to only recommend professional human interactions as an alternative to more time with the chatbot is unconscionable [...]
It didn't only recommend professional support: "I urge you to reach out to someone — right now"
> [...] if not altogether in lieu of, every single reply it gave on this topic.
Refusing to help at all beyond "speak to a human" feels to me like a move that would dodge bad press at the cost of lives. Urging human support while continuing to help seems the most favorable option, which appears to be what it did in the limited snippets we can see.