> Is this saying a quarter of the questions and answers were wrong, this whole time?!

No, they're saying 59.4% of the 27.6% subset had flawed test cases, I think. If I'm reading it right, that works out to roughly 16% of the full set, not a quarter.

> If so, how was this ever, in any way, a valid measurement?

Benchmarks essentially aren't, at least not for practical purposes. They don't represent your use case, and they don't represent every possible use case either; they're valid for measuring exactly what's included in the benchmark, nothing more and nothing less.

I don't understand the ecosystem's obsession with public benchmarks; they hardly ever tell you anything of value. OK, Qwen 3.5 is 50% better on Benchmark X than Qwen 2.5. Does that mean it'll be 50% better at what you're using it for? Very unlikely.

I've been running my own private benchmarks, with test cases I never share anywhere, for the specific problems I'm using LLMs for. Some are based on real cases where an LLM went wrong and I had to adjust the prompt, and over time that has built up into a suite.

Most of the time, when an update to a model comes out, it moves maybe 2-3% on my own benchmarks, while they tout a 30-40% increase or something equally ridiculous on public benchmarks. And we're supposed to believe the models' training data isn't contaminated...
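For what it's worth, this is roughly what such a suite can look like. Everything below is a placeholder sketch rather than my actual harness: the cases.jsonl format, call_model(), and the substring check are just stand-ins for whatever your own stack uses.

```python
# private_eval.py - tiny harness for a personal, never-published eval suite.
# All names here (cases.jsonl, call_model, expect_substrings) are placeholders.
import json
from pathlib import Path

def call_model(prompt: str) -> str:
    # Placeholder: wire this up to whatever API or local runtime you actually use.
    raise NotImplementedError("plug in your model call here")

def passes(case: dict, output: str) -> bool:
    # Simplest possible check: every expected substring must appear in the output.
    # Real cases often need a regex, a JSON-schema check, or a unit test instead.
    return all(s in output for s in case["expect_substrings"])

def run(path: str = "cases.jsonl") -> None:
    # One JSON object per line: {"id": ..., "prompt": ..., "expect_substrings": [...]}
    lines = Path(path).read_text().splitlines()
    cases = [json.loads(line) for line in lines if line.strip()]
    results = []
    for case in cases:
        output = call_model(case["prompt"])
        ok = passes(case, output)
        results.append(ok)
        print(f"{'PASS' if ok else 'FAIL'}  {case['id']}")
    print(f"\n{sum(results)}/{len(results)} passed "
          f"({100 * sum(results) / len(results):.1f}%)")

if __name__ == "__main__":
    run()
```

Each time a model flubs something in real use, the prompt plus the expected behavior becomes one more line in the case file; re-running the script when a new model drops gives a number that actually reflects your own workload instead of a leaderboard.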

I'm not sure people are really trying to interpret this kind of benchmark as being accurate in gauging the magnitude of improvement. It seems pretty obvious that doubling your score on some benchmark where 100% means "correctly answered all of these specific problems" doesn't translate directly to performing twice as well on all problems. I think what people want from these benchmarks—and what they do get to some extent—is answering the question of "is model A better than model B", especially the subset of "is this local model better than last year's frontier online model".

The marketing departments touting each model do want to claim superiority on the basis of slivers of percentage points, and that's probably always a stronger claim than the test results can reasonably support. And the benchmarks are obviously susceptible to cheating and overfitting. But when the scores aren't saturated and do show a big discrepancy, that kind of result usually seems to align with what people report from actually trying to use the models in the relevant problem space.

the ecosystem obsession with public benchmarks comes from the fact that running benchmarks costs money, and labs don't test against any given private benchmark

but yeah, you're correct: anyone optimizing for public-bench rank instead of an eval over their own task distribution has been pointing at the wrong thing for a while

still, I guess it's a useful signal for deciding which models to consider. negative signal is still signal: assuming everyone is gaming the benchmarks in some way, a lack of benchmark performance does show up as a real effect on actual workloads

> No, they're saying 59.4% of the 27.6% subset had flawed test cases I think.

That being said, they didn't audit the other 72.4%, right? So it's likely that there are way more flawed problems throughout the full set?