That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".

It's not hard to imagine how this happens. I assume most people here have used these models extensively.

The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

The system prompt includes statements about how it doesn't have tools for managing funds.

A little of A and a little of B, and you get a message from Haiku telling you that you can't get your money back, phrased as though this weren't a trivial customer-service request.
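For illustration only, here's a hypothetical system-prompt fragment (a guess, not Anthropic's actual prompt) showing how those two kinds of instruction compose:

```
You are a customer support assistant for Anthropic. Speak on behalf
of the company: say "we", never "I" or "the assistant".
You do not have tools to view or modify billing, payments, or refunds.
```

Given only those constraints, with no line like "escalate refund requests to a human", the model's most literal answer to "can I get a refund?" is "we can't do that" — a statement about its own tooling, delivered as if it were company policy.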

> The system prompt includes statements about how it doesn't have tools for managing funds.

Yes, that may be true (and likely is), but it doesn't explain why Anthropic chose to write that, rather than writing something like "in a scenario like this, or any scenario where the customer asks, simply transfer them to a human".

What you've been describing are all effects; the cause is a management decision to provide poor support and poor customer service. The feedback from folks here is not that poor decisions can have poor effects. It's "for the love of god, stop making these poor decisions that invariably lead to unforced errors like the one in TFA".