I'm just saying it's epistemically unrigorous to the point of being equivalent to anecdata.
How should one conduct such a rigorously reproducible experiment when LLMs by nature aren't deterministic and when you don't have access to the model you are comparing against from months ago?
Something like this: https://marginlab.ai/trackers/claude-code/ (see methodology section)
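The tracker's exact methodology is described at that link, but the general shape of such an approach can be sketched: run a fixed task suite against the live service repeatedly (to average over nondeterminism), record the pass rate over time, and flag a drop against a stored baseline. Everything below is hypothetical — the function names, the mock model, and the regression threshold are illustration only, not the tracker's actual implementation:

```python
import random
import statistics

def run_suite(model, tasks, trials=5):
    """Run each task several times (LLM outputs are nondeterministic)
    and return the mean pass rate across all tasks."""
    rates = []
    for task in tasks:
        passes = sum(1 for _ in range(trials) if model(task))
        rates.append(passes / trials)
    return statistics.mean(rates)

def regression_detected(baseline_rate, current_rate, threshold=0.05):
    """Flag a regression when today's pass rate drops more than
    `threshold` below the stored baseline."""
    return (baseline_rate - current_rate) > threshold

# Hypothetical stand-in for a real model call: returns True iff the
# task "passed" (in practice this would grade a live API response).
random.seed(0)
def mock_model(task):
    return random.random() < 0.9  # pretend ~90% pass probability

tasks = [f"task-{i}" for i in range(20)]
baseline = run_suite(mock_model, tasks)  # e.g. recorded at launch
today = run_suite(mock_model, tasks)     # re-run on the live service
print(regression_detected(baseline, today))
```

The key point is that reproducibility comes from fixing the task set and aggregating over many trials, not from expecting any single completion to be deterministic.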
Kudos for the methodology. The only question I can come up with is whether the benchmarks are representative of daily use.
Anecdotal or not, we see enough reports popping up to at least elicit some suspicion of service degradation that isn't shown in the charts. My hypothesis is that the degradation users experience, assuming there is merit to the anecdotes, isn't picked up by this kind of tracking strategy.
It's not my methodology, to be clear, but they have picked up actual regressions in the past - e.g. https://news.ycombinator.com/item?id=46815013