I think there are two types of people in these conversations:

Those of us who just want to get work done don't care about comparisons to old models; we just want to know what's good right now. Issuing a press release that compares against old models, when they had enough time to re-run the benchmarks and update the imagery, is a calculated move where they hope readers won't notice.

There's another type of discussion where people just want to talk about how impressive it is that a model came close to some other model. I think that's interesting too, but less so when the models are so big that I can't run them locally anyway. It's useful for making purchasing decisions if you're trying to keep token costs as low as possible, but for actual coding work I've never found it useful to use anything other than the best available hosted models at the time.

It's of high interest to me because open models are the ultimate backstop. If the SOTA hosted models all suddenly blow up or ban me, open models reduce the consequences from "catastrophe" to "no more than six to nine months of regression". The idea that I could run a ~GPT-5-class model on my own hardware (given sufficient capex) or on cloud hardware under my control is awesome.

For the record, Opus 4.6 was released less than a week ago.

That you think corporations are anywhere close to quick enough to update their communications on public releases like this only shows that you've never worked in corporate.