I understand where you're coming from, and I'd love to have learned whether they pre-trained their own model or built on an off-the-shelf base model too. But

> their own internal benchmark that they won't release

If they released their internal benchmark suite, it would make it into the training set of nearly every LLM, which, from a strictly scientific standpoint, invalidates all conclusions drawn from that benchmark from then on. On the other hand, not releasing the benchmark means they could have hand-picked the datapoints to favor themselves. It's a tension that can't be fully resolved, unfortunately.

I'm not saying SWE-bench is perfect, and there are reports suggesting some contamination of LLM training sets with common benchmarks like SWE-bench. But SWE-bench is published, so anyone can run it, and it has an open leaderboard that attributes results to specific models rather than vague groupings:

https://www.swebench.com/

ARC-AGI-2 keeps a private set of questions to prevent LLM contamination, but it also has a public set of training and eval questions, so people can both evaluate their models before submitting to ARC-AGI and evaluate what the benchmark is actually measuring:

https://github.com/arcprize/ARC-AGI-2

Cursor is not alone in having to deal with benchmark contamination. But Cursor is an outlier in sharing so little when proposing a new benchmark while also not showing performance on industry-standard benchmarks. Without a bigger effort to show what the benchmark is and how other models perform on it, I think the utility of this benchmark is limited at best.

In high-security systems, we solved this problem with trusted, independent evaluators who got all the data. They replicate the results themselves. They analyze every artifact for flaws. They also pen test the system offensively. If they say it's good, then maybe it is good, or at least it's less obviously bad.

We could have third-party groups with published evaluation criteria who don't make models or sell A.I. — strictly evaluators. Alternatively, they could have a separate, steady source of income, with evaluation being the only A.I. work they do.