But why did you say that?

> I need to let you know that we are unable to issue compensation for degraded service or technical errors that result in incorrect billing routing.

What prevents you from issuing compensations?

As a large language model, their support is not allowed to issue compensation

I know this is a joke, but Amazon’s bots give me compensation literally all the time when something goes wrong. It’s possible.

Of course it's possible, it's just a permissions decision.

Same experience. Literally yesterday it refunded me for a thermos shattered by the delivery guys.

Interestingly, the Starlink customer service bot has applied credits to my account before.

Perhaps this is a matter of who is being referred to by 'we'.

Obviously someone can do it because it got done.

If the 'we' is referring to some team handling issues, it would make more sense. In that case they should have said something along the lines of "I have informed someone who can help".

Does AI using first-person pronouns gross anyone else out? If there's one AI regulation I could get behind, it would be banning the use of computer systems to impersonate a human.

I don't perceive an AI as impersonating a human if it uses first person pronouns. Emulating is not impersonating. One is behaving similarly, the other is asserting that the similarity implies equivalence.

I have not personally encountered an AI that claimed to be human (as far as I could detect).

I agree with you, but I also envy you for having never encountered an AI scam bot (where someone hacks a WhatsApp or other account and uses an AI to get money from the victim's contacts, or even runs the "hey, sorry I missed your call" scam).

Maybe this is a regional thing, I don't think anyone who I have encountered in real life has mentioned anything like this happening to them.

Wow, these were quite common for me personally a few years ago. I still get them from time to time, but I used to get them weekly. In the US, where scams are pretty rampant.

I have been trying to convince Claude to use "Claude" instead of first-person pronouns, and only recently have gotten it to say stuff like "Claude'll go ahead and take care of that now", but it's very inconsistent (shocking).

Well, they hoped this person would walk away and forget about it, die, or something else. That's why. It's how health insurance works in the US.

That's a very categorical statement from support. I get that Anthropic is going to throw out their usual support rules in this case since it has garnered so much negative attention, but I'm very curious how many other people have been over-billed and refused a refund through no fault of their own.

To be fair, that looks like an LLM response.

LLM or not, that seems to be an official response to a support request, where they clearly say "yes, we fucked up but now you fuck off", and it looks like the model was conditioned to produce these particular responses.

Which they, of all companies, are responsible for

You're not wrong.

That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".

It's not hard to imagine how this happens. I assume most people here have used these models extensively.

The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

The system prompt includes statements about how it doesn't have tools for managing funds.

A little bit of A and a bit of B, and you get a message from Haiku telling you that you can't get your money back, phrased as though this weren't a trivial customer service thing to do.
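A minimal sketch of how those two constraints could compose (the rule text, function, and routing logic here are assumptions for illustration, not Anthropic's actual prompt or setup):

```python
# Hypothetical sketch: two innocuous-looking system-prompt rules combining
# into a categorical "we can't refund you" answer. Neither rule is known
# to be Anthropic's real configuration.

SYSTEM_PROMPT_RULES = [
    "Always speak as 'we' on behalf of the company.",    # voice rule (A)
    "You have no tools for managing funds or refunds.",  # capability rule (B)
]

def draft_reply(user_request: str) -> str:
    """Naive responder that applies the rules literally to a refund request."""
    if "refund" in user_request.lower():
        # Rule B says the bot can't move money; rule A says it speaks for
        # the whole company. Composed literally, "I can't" becomes "we can't",
        # even though a human agent plainly could.
        return "We are unable to issue compensation for this issue."
    return "How can we help you today?"

print(draft_reply("I was over-billed, please refund me"))
```

The point of the sketch is that neither rule is unreasonable on its own; the misleading "we can't" only appears when the two are followed together without an escalation path.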

> The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?

> The system prompt includes statements about how it doesn't have tools for managing funds.

Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?

What you've been describing are all effects of the cause, which is poor management decisions: choosing to have poor support and poor customer service. It would likely not have happened if the prompt had included something like "in a scenario like this, or any scenario where the customer asks, simply transfer them to a human", and if Anthropic had not decided to run dysfunctional support and customer service.

The feedback from folks here is not that poor decisions can have poor effects. It's 'for the love of god, please stop making poor decisions that repeatedly, invariably, lead to unforced errors like the one in TFA'.