There is a benchmark for performance-optimization work, and I don't think model vendors are optimizing for it. The latest GSO results show both Opus 4.6 and 4.7 slightly outperforming GPT 5.5, which matches my experience.

https://gso-bench.github.io/

Tasks are taken from commit histories in public Git repositories, which defeats the purpose: the reference solutions may already be in the models' training data.