Hey everyone, Thariq from the Claude Code team.
We've been on this since the bug surfaced. Everyone affected is getting a full refund and an extra grant of usage credits equal to their monthly subscription as our apology. You can see my original post here: https://x.com/trq212/status/2048495545375990245. We’re still working on sending emails to everyone affected.
Our support flow wasn't set up to route a complex bug like this to engineering. We're working on making this better, but it will take some time. Sorry to everyone caught up in it.
You also seem to have a bug where people get randomly invoiced: https://news.ycombinator.com/item?id=47693679
I got a random invoice for $45.08 back in March, despite not having auto top-up enabled. Trying to reach support, I hit a brick wall. Based on the post I linked to, I'm not the only one facing this problem.
They also have a bug where people get randomly suspended: https://www.reddit.com/r/ClaudeAI/comments/1b82cpu/where_you...
It happened this year to my one and only personal account. The account was one week old. Unique e-mail address. $5 balance for API credits. No usage yet. Suspended and refunded. Appeal denied without explanation.
I did create the account on a VPN because I was using public WiFi at a tech conference. That's probably what tripped their automation.
Using certain types of cards will get you automatically banned; I found that out after getting three accounts suspended. I made them all using the same VPN and email domain. I've been using the fourth account with no issues, with a reputable bank's debit card.
I also got randomly invoiced $5.00 for absolutely no reason on the 28th. I don't have auto-reload enabled, nor did I explicitly buy extra usage.
Happened to me too, but my card didn't actually get charged, so maybe check yours. Also, the card on the invoice wasn't even the card I'm using with Anthropic.
My card did get charged.
lol, are they doing stochastic invoicing?
But why did you say that
> I need to let you know that we are unable to issue compensation for degraded service or technical errors that result in incorrect billing routing.
What prevents you from issuing compensations?
As a large language model, their support is not allowed to issue compensation
https://simonwillison.net/2025/Feb/3/a-computer-can-never-be...
I know this is a joke, but Amazon’s bots give me compensation literally all the time when something goes wrong. It’s possible.
Of course it's possible, it's just a permissions decision.
Same experience. Literally yesterday it refunded me for a thermos shattered by the delivery guys.
Interestingly, the starlink customer service bot has applied credits to my account before.
Perhaps this is a matter of who is being referred to by 'we'.
Obviously someone can do it because it got done.
If the 'we' is referring to some team handling issues it would make more sense. In that case they should have said something along the lines of "I have informed someone who can help"
Does AI using first person pronouns gross anyone else out? If there’s one AI regulation I could get behind it would be banning the use of computer systems to impersonate a human
I don't perceive an AI as impersonating a human if it uses first person pronouns. Emulating is not impersonating. One is behaving similarly, the other is asserting that the similarity implies equivalence.
I have not personally encountered an AI who claimed to be human (as far as I could detect)
I agree with you, but I also envy you for having never encountered an AI scam bot (where someone hacks someone's WhatsApp or other account and uses an AI to get money from them, or even does the "hey sorry I missed your call" scam).
Maybe this is a regional thing, I don't think anyone who I have encountered in real life has mentioned anything like this happening to them.
Wow, these were quite common for me personally a few years ago. I still get them from time to time, but I used to get them weekly. In the US, where scams are pretty rampant.
I have been trying to convince Claude to use "Claude" instead of first-person pronouns, and only recently have gotten it to say stuff like "Claude'll go ahead and take care of that now", but it's very inconsistent (shocking).
Well, they hoped this person would walk away and forget about it, die, or something else. That's why. It's how health insurance works in the US.
That's a very categorical statement from support. I get that Anthropic is going to throw out their usual support rules in this case since it has garnered so much negative attention, but I'm very curious how many other people have been over-billed and refused a refund through no fault of their own.
To be fair, that looks like an LLM response.
LLM or not, that seems to be an official response to a support request, where they clearly say "yes, we fucked up but now you fuck off", and it looks like the model was conditioned to produce these particular responses.
Which they, of all companies, are responsible for
You're not wrong.
That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".
It's not hard to imagine how this happens. I assume most people here have used these models extensively.
The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".
The system prompt includes statements about how it doesn't have tools for managing funds.
A little bit of A and a bit of B, and you get a message from Haiku telling you that you can't get your money back, phrased as though this isn't a trivial customer service thing to do.
> The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".
Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?
> The system prompt includes statements about how it doesn't have tools for managing funds.
Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?
What you've been describing are all effects of the cause, which is poor management decisions to have poor support and poor customer service. It would likely not have happened if the prompt included something like "in a scenario like this, or any scenario where the customer asks, simply transfer them to a human", and if Anthropic had not decided to have dysfunctional support and customer service.
The feedback from folks here is not that poor decisions can have poor effects. It's 'for the love of god, please stop making poor decisions that repeatedly, invariably, lead to unforced errors like the one in TFA'.
I try to avoid jumping on the bandwagon when it's already covered, but billing bugs being treated like any other software issue, and the major comms channel being X (which I can't get to load half the time), is ridiculous.
"Our support flow wasn't set up"
Would be more accurate. It still isn't set up. Talking to a bot for support that only tells you to talk to the bot for support is not actually support at all. It looks like support, but there's no way to ACTUALLY GET support.
Thanks for the follow up here and the transparency.
For those of us not on X, what are the best communication channels for us to follow this sort of communication?
I'd recommend a good credit card like Amex, and a lawyer.
These fucks only respond when they get bad publicity.
Amex, like basically all other card issuers, has essentially stopped giving customers preference in chargebacks since 2020 or so. What used to be solid advice now rings hollow: you're more likely to be asked for information that's not available to you than to have your chargeback go through.
Anecdotal but Chase helped me out when my gym kept charging me after I canceled. I kept my cancelation receipt and sent that in and that's all I needed to do.
[flagged]
Could really use a post-mortem to set the story straight. The apparently-hallucinated support response copied-pasted by the submitter showing up in the github issue thread is very misleading without scrutiny
Weekly postmortem at this rate.
It's only "very misleading" if Anthropic has implemented an actual support system in the meantime.
A side aspect of this drama is the root feature which enabled this bug:
> ugh sorry this was a bug with the 3rd party harness detection and how we pull git status into the system prompt
Claude wants to exercise control of how I use the "inclusive volume" that I purchased with my monthly subscription. This harms competition (someone else could write a more efficient or safer coding agent) and is generally not in the best interest of society. Why do we allow this?
This specific case is interesting, because it is so clear cut. There is no cross financing via ads, they already have the infrastructure to measure usage and even the infrastructure to bill extra usage. I also don't see how you can plausibly make the argument that restricting usage to their blessed client is necessary for fair use or for the basic structure of their business model (this would be the standard argument for e.g. Youtube: Purposefully degrading the experience of their free client to not support background playback enables the subscription model).
Have a look at https://github.com/anthropics/claude-code/issues/54497
I can’t use Claude Code online at all
I have the same issue when I try to run /ultraplan
I tried /debug as the only input, hoping CC wouldn’t shit the bed and give me some data.
Heck, just saying “hello” causes Claude Code to fail.
I’m thinking of doing a charge back, and creating a new account. Others don’t seem to have this issue.
Sorry, but you have to make a separate HN post for them to care. Wait like 2 hours so this one dies down; otherwise it might not get to the front page with enough other people dealing with it.
I tried and got no feedback.
Can people please raise this person's comment to the top of HN by upvoting it so this person can get their money back. Because that's where we are right now.
> Our support flow wasn't set up to route a complex bug like this to engineering.
What does that even mean? Does it mean, "our support flow is just an LLM that fobs off customers and puts their issues into the bin"? Or is there some genuine "routing" of simple bugs to engineering which accidentally drops "complex" bugs? Could you describe that process? It sounds fascinating.
Also, how is changing a customer's billing based on detecting a certain string in a certain place a "complex" bug? Grep the string, remove the if statement, done. I'd love a post-mortem about why this was a complex bug.
More questions than answers here Thariq.
Is it complex? I was somewhat taken aback by how simple it was. Still very confused as to how it could happen.
Only the weights and the RNG used to select tokens can answer that. You will understand much if you read up on the quality of code in the CC source leak, it's completely vibe coded and the printf fn is genuinely impossible for a human to comprehend.
Please do explain why someone at Anthropic decided, on purpose, to write code that says something along the lines of: "if ( git_history_str contains "HERMES.md" ... )" then { bill more money }
Somebody (or something) wrote this code. This bug wouldn't be happening for any other reason. It's not a glitch, an oversight, a feature gap, or a temporary outage. It is a piece of written code in your system.
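To make the complaint concrete, the reported behavior would amount to something like the sketch below. Every name here (`route_billing`, the billing bucket strings) is a hypothetical illustration based on this thread's description, not Anthropic's actual code:

```python
def route_billing(git_status: str) -> str:
    """Hypothetical reconstruction of the described bug: a third-party
    harness marker in the git status text pulled into the system prompt
    silently flips which billing bucket the session is charged to."""
    if "HERMES.md" in git_status:
        return "extra_usage"   # metered billing beyond the subscription
    return "subscription"      # normal inclusive usage
```

A string match this specific is exactly the kind of thing a reviewer would be expected to question.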
Everyone here is upset about the $200, which is probably much less money than the time that engineer spent ranting about the overcharge on GitHub.
The real problem in my mind is that that bit of code existed in the first place.
Why?
Are you vibe coding your billing!?
Without review!?!?
Or worse, a human being decided to add this to your code base? And nobody noticed or flagged it during code review?
Or much, much worse, Anthropic is purposefully ripping off customers?
This deserves a thorough post-mortem.
Would imagine it's the simplest answer: they're flying by the seat of their pants, there's 1000 things happening every day that demand attention and there's not enough of it to go around. They toss their LLM at it, give it a cursory glance, and ship it. A quick glance at the Claude Code source code bears the result of this process out. The fundamental question is, if their model is so powerful, why do they keep fucking up such simple things? We're led to believe this is a serious company with a model so powerful they can't release it to the general public.
Hermes is one of these OpenClaw clones, so this was certainly intentional, not a model hallucinating something.
I think the problem is clear. Anthropic saw their usage go up much more than their capacity could handle. There are a few tried and true solutions to this, like "increase the price" or "restrict signups so you can guarantee service to what you have already sold".
Then there is the "large scale fraud" option, where you materially change and degrade the service you have already sold. Just because you have obfuscated and misled in how you describe the product you are selling doesn't mean you get to capture the cash flow of 1-year subscriptions and then not honor that contract for the full duration.
Late in replying to this, but just wanted to say I found this pretty compelling. I generally think people are too quick to assign to malice what could be assigned to incompetence. In this case I'm not convinced of that anymore especially given their public statements about these third-party harnesses. It does seem unavoidable that they'll have to move away from subscription-based pricing and towards token-based, but they're managing this in a really ham-fisted and user hostile way regardless.
> Hermes is one of these OpenClaw clones
So that's what it is. Reading its README I thought it was another harness like Pi [1], but with built-in memory so it remembers what it learns, and gets more capable the longer it runs.
Like Letta [2], Dirac [3][4] and the other "more experimental harnesses that look interesting but I haven't had time to try out".
1. https://pi.dev/
2. https://www.letta.com/
3. https://dirac.run/
4. https://news.ycombinator.com/item?id=47920787
Mind pointing out where exactly in the contract you were allowed to use OpenClaw?
Non-Claude client access is not permitted in the terms and conditions, except via API key.
The correct implementation of this condition by Anthropic on the server side would be to block usage by non-Claude apps via Claude's authentication mechanism, and allow it via the per-token API key billing.
Instead of a simple 403 error, which would block usage, they silently redirect to a different billing bucket, which is not ethical behaviour, especially since it is based on fuzzy heuristics.
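The enforcement being proposed could be sketched as below. The client allowlist, auth method names, and billing buckets are all made-up illustrations; Anthropic's real auth flow is not public:

```python
ALLOWED_CLIENTS = {"claude-code", "claude-desktop"}  # hypothetical allowlist

def authorize(client_id: str, auth_method: str):
    """Return (http_status, billing_bucket)."""
    if auth_method == "api_key":
        # Third-party clients are fine, billed per token via the API.
        return 200, "per_token"
    if client_id in ALLOWED_CLIENTS:
        return 200, "subscription"
    # Reject outright rather than silently rerouting to a pricier bucket.
    return 403, None
```

The key property is that an unrecognized client either works under per-token API billing or fails loudly; it never keeps working while being charged differently.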
I doubt an AI would be stupid enough to write code like that without being explicitly prompted to do so. It's so... specific.
That specific nature would mean it would get caught by even the most cursory of code reviews.
Even if I was just "scanning my eyeballs over the code" without properly reading it, this would jump out as very odd and make me pause.
Vibes were strong dude. Don't blame the dev blame the bots brah. They forgot to use mythos obviously otherwise this wouldn't happen simple mistake.
Anthropic obviously vibe codes everything, and it shows.
Hey Thariq, I love Claude! I use Claude every single day and it has changed my life, which is why I did what I'm about to describe.
Happy to talk privately, but as I detailed here, https://news.ycombinator.com/item?id=47954005 . I've been billed $200 for a Max gift card to a 27 character alphanumeric icloud address that bounces.
I was looking through the system, and there are several UI/UX and process gaps in the gift card and billing order flow that expose Anthropic to significant liability. I'm genuinely not trying to concern troll or make some kind of overwrought threat here. Genuinely trying to be constructive. Let me give you an example.
I sent an email to Anthropic Support outlining the disputed / possibly malicious charge. The AI Agent / Claude instance agreed and replied with,
And then no one followed up; the conversation was closed without recourse and I wasn't allowed to reply. I'm not sure how familiar you are with international trading practices, but in multiple jurisdictions, the AI agent assumed legal liability for Anthropic. It accepted that the charge was unauthorized / fraudulent, stated that redress was needed, but then failed to offer the means to redress it / didn't allow the refund to continue.
I am not a lawyer, but based on my understanding of prior cases (I read this kind of stuff for fun, don't ask) – in the EU, the US and Canada, users can approach courts and invoke the doctrine of promissory estoppel (again don't quote me on this, IANAL, just like reading case law). And if enough users are affected / do so, it becomes a deceptive practices issue.
I've been thinking about how to solve this problem, and as strange as it sounds, I think Anthropic already has the tools to make the best customer support service in human history. No exaggeration. I think that this crisis could be an opportunity.
Apparently we are now expected to know by some telepathic mechanism that important customer service announcements are made only on Twitter.
https://xcancel.com/trq212/status/2048495545375990245
hey guys can you please fix claude design? I've been trying to test it tonight and already used up 20% of my usage and all i get is continuous [unknown] missing EndStreamResponse errors (and this is after your status page reflected everything ok).
I have been badly affected - it killed my vibe.
Is there no constraint preventing extra usage billing from being used before regular usage billing has been exhausted?
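The constraint the question envisions could be sketched as a simple metering guard. This is a hypothetical model, assuming token-based quotas; none of these names come from Anthropic's system:

```python
def meter_usage(tokens_used: int, subscription_remaining: int):
    """Hypothetical guard: draw down the inclusive subscription quota
    first, and only bill extra usage once it is fully exhausted.
    Returns (tokens_charged_to_subscription, tokens_billed_as_extra)."""
    from_subscription = min(tokens_used, subscription_remaining)
    extra_billed = tokens_used - from_subscription
    return from_subscription, extra_billed
```

With an invariant like this in the billing path, extra-usage charges could not appear while inclusive quota remains, regardless of how the client was classified.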
I’ve had similar terrible experiences with the Claude support bot when my usage limit was disappearing after a few minutes using Sonnet. I asked for help, asked for escalation, asked for a human, anything. All I got were non-answers from an AI. I won’t spend real money on Claude now. I’m ok with losing $20 if there’s a rug pull one way or another, but not $200.
Please, please, please hire more humans with the actual ability to do the right thing for support if your AI agents can’t do the job.
[dead]
[flagged]
[flagged]
That being flagged is completely absurd and honestly I believe you're right because I've never seen anything like it on HN. It's completely out of place for that comment to be flagged to death. That isn't natural.
It wasn't flagged. Compare to this comment by the same user that was actually flagged: https://news.ycombinator.com/item?id=47954834 Note the part where it says [flagged] [dead] instead of just [dead].
That seems... worse? What would've caused this?