Holy slop.

This does reflect my experience with Claude Code too. It just writes TOO MUCH damn code. Without guidance, it's never able to spot the tiny change that would actually make the software better, and at that point I'd rather write it myself.

It's fine for gruntwork though.

My experience as well - it would rather reinvent the wheel over and over.

Their owners charge per token, so...

On the Pro tier, it's a fixed monthly price with a fixed quota per 5-hour window.

That said, every time I've tried it, it's spent ages writing code that barely works, piling over-engineered workarounds on top of obvious errors. Eventually it gives up, decides broken is good enough, and returns to the prompt. So you still have a point…

It was trained on the code the finest leet coders wrote. I do wish it would look at my existing code base and write more shit code like I write.

Looks like it's mostly tests and AI specs.

It all amounts to chargeable tokens in the end.

Anthropic offers a flat fee subscription.

They offer unlimited flat fee subscriptions? Someone better alert Anthropic:

https://news.ycombinator.com/item?id=44713757

"affecting less than 5% of users"

Conspiracy theories need at least a passing compatibility with reality. Anthropic loses money when more tokens are used to solve the same problem.

Is it really a conspiracy theory that these companies want to charge by throughput? What exactly is out of the realm of possibility when these companies literally charge by the token...

This isn't directed at you, but rather at the general "A!", "No, A is no good. B!" thing that HN does. Lots of people swear by Claude Code on HN; nearly any post that could shoehorn in an AI discussion has someone saying "But I just use Claude Code and it works fine!", with others saying that Gemini is better if you pay, etc.

The issue is, very few actually publish the AI code. I have, at least three times on HN. I don't pay for AI - well, I put $10 on DeepSeek to check it out and have spent less than a penny. I mostly use local models or Copilot. I've never used ChatGPT to write code, nor Claude, Gemini, Grok, or Meta.

So, the result is, this comes off as:

  "My football team is best because A,B,C!"
  "No, A & B aren't important, C,X,Y are, and my football team has those!"
  "So you agree C is important?"
Anyhow, in support of my point, here's some of my AI output:

https://news.ycombinator.com/item?id=44652138 I used Copilot to add static and fix the digits so they're spoken as single digits instead of groups: "7, 3, 4" instead of "seven hundred and thirty-four." Done with copilot.exe; the final version, without the pops, clicks, and crashes, is at: https://github.com/genewitch/opensource/blob/master/numbers-...
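
For illustration, the digit fix boils down to something like this (my own rough Python sketch of the idea, not the actual repo code):

  # read a number one digit at a time (illustrative only, not the repo code)
  DIGIT_WORDS = ["zero", "one", "two", "three", "four",
                 "five", "six", "seven", "eight", "nine"]

  def speak_digits(n: int) -> str:
      # 734 -> "seven, three, four"
      return ", ".join(DIGIT_WORDS[int(d)] for d in str(n))

  print(speak_digits(734))  # seven, three, four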

https://github.com/genewitch/opensource/blob/master/specific... and https://github.com/genewitch/opensource/blob/master/markov%3... to convert n-gate to JSON and then feed the JSON into a Markov chain. Done with copilot.exe.
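
If you've never seen one, a word-level Markov chain is tiny. Here's a toy Python version under my own assumption that the JSON is just a list of strings (the real scripts are in the repo above):

  # toy word-level Markov chain over a JSON list of strings (sketch only)
  import json
  import random
  from collections import defaultdict

  def build_chain(texts):
      # map each word to the words that follow it
      chain = defaultdict(list)
      for text in texts:
          words = text.split()
          for a, b in zip(words, words[1:]):
              chain[a].append(b)
      return chain

  def generate(chain, length=20):
      # random-walk the chain to spit out new text
      word = random.choice(list(chain))
      out = [word]
      for _ in range(length - 1):
          followers = chain.get(word)
          if not followers:
              break
          word = random.choice(followers)
          out.append(word)
      return " ".join(out)

  texts = json.loads('["the quick brown fox", "the lazy dog sleeps"]')
  print(generate(build_chain(texts)))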

https://github.com/genewitch/aider2048clone A local 70B LLM oneshot with Aider (a tool to write codebases with AI); oneshot means I typed a prompt and then published the output - I didn't edit or change anything or re-prompt.

and the oldest, and my favorite example so far: https://github.com/genewitch/emd A full React app stack - including the node.js 'server.js' - done in copilot.exe over the course of ~20 hours. I didn't manually edit the code except for one tiny part, the only math in the code; I worked that out on paper with a pencil, then coded it in myself, because I couldn't explain it well enough to copilot for it to produce the code I wanted. Luckily the nuts and bolts of JavaScript are easy enough; it's all the const and "{}" that I don't "get".

I've linked all of these on HN before, usually in protest of someone else not linking their code and/or complaining that no one links their code.

None of these used "thinking" mode.