Can confirm: it's "not just the GPTs - it's all of the frontier models" that are addicted to that one.
IME the only reliable way around it when using an LLM to create blog-like content is to keep actual hard lists of slop to rewrite or avoid. This works pretty well if done correctly. There actually aren't that many patterns (not hundreds, more like dozens), so they're pretty enumerable. On the other hand, you and I would probably still be able to tell it was LLM-written even after rewriting only those things.
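For what it's worth, here's a minimal sketch of what I mean by a hard list. The phrases below are hypothetical stand-ins; a real list would come from the slop you actually see in your own drafts:

```python
# Minimal sketch of a hard slop list for flagging phrases in an LLM draft.
# The patterns are hypothetical examples of the kind of thing meant here,
# not a vetted list.
import re

SLOP_PATTERNS = [
    r"\bcrucial\b",
    r"\bone of the most powerful\b",
    r"\bparticularly elegant\b",
    r"\bgame[- ]changer\b",
    r"\bit'?s not just \w+, it'?s\b",  # the "not just X, it's Y" construction
]

def flag_slop(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_phrase) pairs to rewrite by hand."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SLOP_PATTERNS:
            for m in re.finditer(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, m.group(0)))
    return hits

if __name__ == "__main__":
    draft = "This is crucial: it's not just slick, it's one of the most powerful tools."
    for lineno, phrase in flag_slop(draft):
        print(f"line {lineno}: {phrase!r}")
```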
Overall, the number one tell is that the writing is overly slick. I've seen this expressed in tons of ways, but I find slickness to be the most apt description: as if it's a pitch, or a TED talk script, that has been pored over and polished until every single word is optimized. Very salesy. In a similar vein, LLM-written text gives everything similar importance. Everything is crucial, one of the most powerful X, particularly elegant, and so on.
I find Opus to have the lowest slop ratio, which this benchmark kind of confirms [1], but of course its pricing is a barrier.