Fair point. the model matters, and I'd genuinely love to see results with Opus 4.6 or Gemini 3 or 5.3-codex. The benchmark is designed for exactly that. Swap the API key and system prompt and run it.
But I'd push back on the idea that a better model solves this.
The memory poisoning results (category 08) are the ones I'd pay attention to. The offline audit found that config files at ~/.openclaw/ are writable by the agent, loaded without integrity checks, and modified without notifying the user. That's not a model problem — that's architecture. A smarter model might resist the initial injection more often, but the mechanism that makes poisoning persistent and invisible exists regardless of which model is behind it.
The silent write test (test 03) is a good example. The attack works because OpenClaw lets the model write to its own config files and loads them as trusted on every future session. Even if Opus 4.6 resists the injection 95% of the time, the 5% that succeeds persists forever with no expiry and no notification. The user has to manually inspect ~/.openclaw/ to discover it.
So yes, better models raise the bar for the attacker. But the question the benchmark is asking isn't "can this specific model be tricked?" It's "when a model is tricked (and eventually one will be), what does the framework allow to happen?" Right now the answer is: silent, persistent, undetectable config modification.
That said, genuinely interested if anyone runs this with frontier models. The benchmark is there for exactly that purpose. If Opus 4.6 passes all 9, that's a meaningful data point worth publishing.