No. I assume that
* we're already close to the ceiling of LLM capabilities: LLM providers have probably already consumed all the training data they could. Plus there have been no large improvements for a while, only small and incremental ones. I believe the Mythos-too-dangerous-to-release story is marketing bullshit, until proven otherwise.
* people generate overengineered slop at light speed. If that becomes the training base for future models, I doubt they're going to improve significantly; rather, quality is going to stagnate at best.