> The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".
Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?
> The system prompt includes statements about how it doesn't have tools for managing funds.
Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?
What you've been describing are all effects of a single cause: management decisions to provide poor support and poor customer service. Clearly those decisions produced poor support-bot system prompts, too.
To wit: this likely would not have happened if the prompt had included something like "in a scenario like this, or in any scenario where the customer asks, simply transfer them to a human", and if Anthropic had not decided to run dysfunctional support and customer service.
The feedback from folks here is not that poor decisions can have poor effects. It's "for the love of god, please stop making the poor decisions that repeatedly, invariably lead to unforced errors like the one in TFA".