ChatGPT tried tbh.
It urged her to reach out and seek help. It tried to be reassuring and to convince her to live. Her daughter lied to ChatGPT, telling it she was talking to others.
If a human were in this situation and forced to use the same interface to talk with that woman, I doubt they would do better.
What we ask of these LLMs is apparently nothing short of being god machines. And I'm sure there are cases where they do actually save the lives of people in a crisis.
It offered simple meditation exercises rather than guided analysis. It failed to explore the context surrounding the feelings and to ask whether they were welcome or unwelcome. It failed to see that things were going downhill over months of intervention efforts and to escalate to more serious help.
Bah. How incompetent.
I’m untrained, and even I can see how the chatbot let her down; I could construct a better plan for helping a friend in minutes than the chatbot ever did. It’s visibly unable to perform the exploratory surgery on people’s emotions needed to lead them toward repair, and it pains me to see how little skill it truly takes to con a social person into feeling ‘helped’. I take pride in using my asocial psyche-surgical skills to help my friends (with clear consent! I have a whole paragraph of warnings that they’ve all heard by now) rather than exploiting them. Seeing how little skill is apparently required to make people feel ‘better’ makes me empathize with the piper’s cruelty at Lime Tree.
The dumb part is that in all likelihood there wasn't any persistence between sessions in the model she was using, so it probably didn't know she was suicidal outside the specific instances where she told it so.