GLM 5.1 was the model that made me feel like the Chinese models had truly caught up. I cancelled my Claude Max subscription and genuinely have not missed it at all.
Some people seem to agree and some don't, but I think that just means it comes down to your specific domain and usage patterns, rather than the SOTA models being objectively better the way they clearly used to be.
It seems like people can't even agree which SOTA model is best at any given moment anymore, so yeah I think it's just subjective at this point.
Perhaps it's not even necessarily subjective; just performance is highly task-dependent and even variable within tasks. People get objectively different experiences, and assume one or another is better, but it's basically random.
Unless you're looking at something like a pass@100 benchmark, benchmark results are heavily confounded by the likelihood of the model stumbling onto a "golden path" within its capabilities. That's on top of uncertainties like how well your task within a domain maps to the relevant test sets, plus factors like context fullness and context complexity (a heavy list of relevant, complex instructions weighs on capabilities differently than, say, a history with prior unrelated tasks still in context).
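For anyone unfamiliar with the metric the comment names: pass@k is usually computed with the standard unbiased estimator from the code-eval literature (my addition, not something the comment spells out). Generate n samples per task, count the c that pass, and estimate the chance that at least one of k random draws passes. A minimal Python sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n samples per task of which
    c passed, the probability that at least one of k randomly drawn
    samples passes is 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:  # fewer failures than draws, so some draw must pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# A model that solves a task only 50% of the time:
print(pass_at_k(100, 50, 1))    # 0.5 at pass@1
print(pass_at_k(100, 50, 100))  # 1.0 at pass@100
```

The example shows how much the number moves with k: the same 50%-reliable model scores 0.5 at pass@1 but is guaranteed a pass at pass@100, which is why single-sample leaderboard scores are so sensitive to luck.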
The best tests are your own custom, personal-task-relevant standardized tests (ones the best models can't saturate; aim for under a 70% pass rate even in the best case).
All this is to say that most people are not doing the latter and their vibes are heavily confounded to the point of being mostly meaningless.
>just performance is highly task-dependent and even variable within tasks. People get objectively different experiences, and assume one or another is better, but it's basically random.
You are right that this is not exactly subjectivity, but I think for most people it feels like it. We don't have good benchmarks (imo), we read a lot about other people's experiences, and we have our own. I think certain models are going to be objectively better at certain tasks; it's just that our ability to tell which is which is currently impaired.
They might be converging somewhat. The ultimate limiting factor is training data. Eventually I think they will converge and then the competition will be on memory and compute efficiency, with the best being the smallest maximally capable model.
And the subjectivity is bidirectional.
People judge models on their outputs, but how you like to prompt has a tremendous impact on those outputs and explains why people have wildly different experiences with the same model.
AI is a complete commodity
One model can replace another at any given moment in time.
It's NOT a winner-takes-all industry, and hence none of the lofty valuations make sense.
The AI bubble burst will be epic and make us all poorer. Yay.
Staying power is probably the most important factor, which is why I'm thinking Google eventually takes the crown.
I feel like it's Sonnet level for implementation, but not matching up to Opus for planning.
But I agree it's close enough that it's worth using heavily. I've not cancelled my Claude Max subscription, but I've added a z.ai subscription...
Hmm
Will try it out. Thanks for sharing!
What is your workflow? Do you use Cursor or another tool for code gen?
I use Opencode, both directly and through Discord via a little bridge called Kimaki.
https://github.com/remorses/kimaki
The value in Claude Code is its harness. I've tried the desktop app and found it was absolutely terrible in comparison. Like, the very nature of it being a separate codebase is already enough to completely throw off its performance compared to the CLI. Nuts.
> The value in Claude Code is its harness
If this was the case then Anthropic would be in a very bad spot.
It's not, which is why people got so mad about being forced to use it rather than better third party harnesses.
Pi is better than CC as a harness in almost every respect.
Anthropic limiting Claude subs to Claude code is what pushed me away in the end because I wanted to keep using Pi.
Just sign up for an AWS account and use the Anthropic models through Bedrock, which Pi can use.
API costs are really high compared to subs.
Then you aren't the target market.
What advantage are you saying this has compared to just directly going through the Anthropic provider? They are the same price.
Why use tricks to support a company that is hostile to your use case?
Can you enumerate why?
- Claude Code has repeatedly had enormous token wastage bugs. Its agent interactions are also inefficient. These are the cause of many of the reports of "single prompt blew through 5-hour quota" even though it's a reasonable prompt.
- It still lacks support for industry standards such as AGENTS.md
- Extremely limited customization
- Lots of bugs including often making it impossible to view pre-compaction messages inside Claude Code.
- Obvious one: can't easily switch between Claude and non-Claude models
- Resource usage
More than anything, I haven't found a single thing that Pi does worse. All of it is just straight up better or the same.
I thought the desktop app used the CLI app in the background?