It's really interesting how much the AI harness seems to matter. Going from 48% via Google's official results to 65% is a huge jump. I feel like I'm constantly seeing results that compare models and rarely seeing results that compare harnesses.
Is there a leaderboard out there comparing harness results using the same models?
We probably want to compare the full Cartesian product of models × harnesses, something like the sketch below.
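A minimal sketch of what that grid could look like (all model/harness names and the run_benchmark stub are hypothetical placeholders, not a real API):

    import itertools

    # Hypothetical names throughout; the point is the shape of the
    # comparison: score every (model, harness) pair instead of every
    # model under a single fixed harness.
    MODELS = ["model-a", "model-b", "model-c"]
    HARNESSES = ["harness-x", "harness-y", "harness-z"]

    def run_benchmark(model: str, harness: str) -> float:
        """Stub standing in for a real eval run (e.g. a terminal-bench suite)."""
        return 0.0  # replace with an actual pass rate

    scores = {
        (m, h): run_benchmark(m, h)
        for m, h in itertools.product(MODELS, HARNESSES)
    }

    # Rank pairs, not models: a weaker model in a strong harness can
    # beat a stronger model in a weak one.
    for (m, h), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{m} + {h}: {score:.1%}")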
Maybe the future isn't a human-like centralized intelligence but an octopus-like decentralized intelligence, where more focus is placed on making the harness itself "smart".
That would be counter to AI company goals. They want the harness to be dumb and the models to be smart, so they can sell models.
Not really. Anthropic, for example, sells both the harness and the models as a unified kit via Claude Code. It's in their best interest to make sure both parts work as well as possible, via reinforcement learning on previous usage as well as raw performance gains in new models.
But harnesses are not a moat. They wouldn't have to subsidize their own harness so heavily if that were the case. Anyone can write a good harness.
It's not true that anyone can write a good harness: the LLM providers have data, like real prompts from their own harness, that they can RL-train on, which someone writing an independent harness wouldn't have. So a good proprietary harness is a moat.
That doesn't answer why Anthropic subsidizes their own harness and bans people from using the subsidized inference in OpenClaw etc.
Yes it does? They want people to be locked into the Claude Code product.
Why do they have to "lock" them in if it's clearly superior to alternatives that merely use their API?
Because it's a way to make more money in the future. I feel like you're not really getting the difference between what a business does for profit and its technical decisions.
Well, the internet is rife with theories about why Anthropic does it. I don't buy that you have it all figured out.
https://en.wikipedia.org/wiki/Bitter_lesson
History indicates you can't tool-and-harness your way to competing effectively against a smarter model with more compute.
The most cited is Terminal-Bench 2.0 [0], but it's also plagued by cheating accusations and benchmaxxing.
Somewhat remarkably, Claude Code ranks last among the Opus 4.6 harnesses, which may say something about Claude Code, or something about the benchmark.
[0] https://www.tbench.ai/leaderboard/terminal-bench/2.0
Isn't that what terminal-bench does?
I really wish there was! I even thought of creating one, but it would be a conflict of interest.
In my local tests over the past few months on the same local model, I've found Claude Code to be way better than OpenCode, and OpenCode better than Codex.