> The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included point to a service degradation and failure on Microsoft's side.
Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that.” in response to my issues.
The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme.
Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?
> Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?
The company can't have it both ways. Either they have to admit the AI "support" is bollocks, or they are culpable. Either way they are in the wrong.
Better than actual human customer agents who give an obviously scripted “I’m sorry about that” when you explain a problem. At least the computer isn’t being forced to lie to me.
We need a law that forces management to be regularly exposed to their own customer service.
I knew someone would respond with this. HN is rampant with this sort of contrarian defeatism, and I just responded the other day to a nearly identical comment on a different topic, so:
No, it is not better. I have spent $AGE years of my life developing the ability to determine whether someone is authentically offering me sympathy, and when they are, I actually appreciate it. When they aren’t, I realize that that person is probably being mistreated by some corporate monstrosity or having a shit day, and I give them the benefit of the doubt.
> At least the computer isn’t being forced to lie to me.
Isn’t it though?
> We need a law that forces management to be regularly exposed to their own customer service.
Yeah, we need something. I joke with my friends about creating an AI concierge service that deals with these chatbots and alerts you when a human is finally, somehow, involved in the chain of communication. What a beautiful world, where we’ll be burning absurd amounts of carbon in some sort of antisocial AI arms race to try to maximize shareholder profit.
The world would not actually be improved by having thousands of customer service reps genuinely, authentically feel sorry. You're literally demanding that real people experience real negative emotions over some IT problem you have.
They don't have to be, but they can at least try to help. When dealing with automated response units the outcome is always the same: much talk, no solution. With a rep you can at least see what's available within their means, and if you are nice to them they might actually be able to help you, or at least make you feel less bad about it.
But it would be improved by having them be honest and not say they’re sorry when they’re not.
It's an Americanism. You might enjoy e.g. a Northern European culture more?
Lying means to make a statement that you believe to be untrue. LLMs don’t believe things, so they can’t lie.
I haven’t had the pleasure of one of these phone systems yet. I think I’d still be more irritated by a human's fake apology, because the company is abusing two people for that.
At any rate, I didn’t mean for it to be some sort of contest — more of a lament that modern customer service is a garbage fire in many ways, and I dream of forcing the sociopaths who design these systems to suffer their own handiwork.