This seems anecdotal but with extra words. I'm fairly sure this is just the "wow this is so much better than the previous-gen model" effect wearing off.
I've always been a believer in the "post-honeymoon new-model phase" being a thing, but if you look at their analysis of how often the postEdit hooks fire, plus how Anthropic has started obfuscating thinking blocks, it seems fishy and not just vibes.
I was in this camp as well until recently. In the last 2-3 weeks I've been seeing problems I wasn't seeing before, largely in line with the issues highlighted in the ticket (ownership dodging, hacky fixes, not finishing tasks).
Nope, there is a categorical degradation in output quality, especially on medium- to high-effort thinking tasks.
What about the evidence from the analysis?
You mean the Claude output? The same Claude that has "regressed to the point it cannot be trusted"?
Are you saying the OP fabricated/hallucinated the evidence?
I'm just saying it's epistemically unrigorous to the point of being equivalent to anecdata.
How should one conduct such a rigorously reproducible experiment when LLMs are by nature non-deterministic, and when you don't have access to the months-old model you're comparing against?
Something like this: https://marginlab.ai/trackers/claude-code/ (see methodology section)
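Not affiliated with that tracker, but the general idea in their methodology section can be sketched: run the same fixed task set against each model snapshot many times, then test whether the difference in pass rates is bigger than run-to-run sampling noise. The function and the numbers below are illustrative, not theirs; a two-proportion z-test is one standard way to do the comparison.

```python
from math import sqrt, erf

def two_proportion_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in pass rates between two
    batches of benchmark runs (normal approximation)."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # identical, degenerate batches: no evidence of change
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical example: 180/200 tasks passed in last month's runs
# vs 150/200 in this month's runs on the same task set.
p = two_proportion_z(180, 200, 150, 200)
print(f"p-value: {p:.5f}")
```

A small p-value means the drop is unlikely to be ordinary non-determinism, which is how you can make claims about a closed model you can no longer query in its old form: you only need the scores you logged at the time, not the old weights.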
Kudos for the methodology. The only question I can come up with is whether the benchmarks are representative of daily use.
Anecdotal or not, we see enough reports popping up to at least raise some suspicion of service degradation that isn't shown in the charts. The hypothesis is that the degradation users experience, assuming there is merit to the anecdotes, isn't picked up by this kind of tracking strategy.
It's not my methodology to be clear, but they have picked up actual regressions that happened in the past - e.g. https://news.ycombinator.com/item?id=46815013
I suspect you might be right, but I don't really know. Wouldn't these proposed regressions be trivial to confirm with benchmarks?