I'd challenge your assumption that quality is collapsing.
I remember the good old days when nobody wrote unit tests, there were no linters, and IDEs had no quality tooling to speak of. The Gang of Four patterns we now take for granted were considered esoteric gold-plating.
Sure, memory usage is high, but hardware is cheap.
That’s fair, but the difference isn’t whether we have linters. It’s the outcomes.
In the ’90s, inefficiency meant slower code. Today it means calculator apps leaking 32GB of RAM, billion-dollar outages triggered by a single missing array field, and reports of 300% more vulnerabilities in AI-generated code.
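To make the “missing array field” failure mode concrete, here’s a minimal sketch in Python. The names and field count are hypothetical, not taken from any real incident (the famous versions of this bug live in C/C++ parsers, where the same mistake becomes an out-of-bounds read), but the shape of the bug is the same: code indexes a parsed record without validating its shape, and crashes the moment an update ships one field short.

```python
# Hypothetical sketch of the "missing array field" failure mode.
# Field names and the expected count of 21 are invented for illustration.

def apply_rule_unsafe(rule: dict) -> str:
    # Assumes the parser always emits 21 parameters. The day an update
    # ships a rule with only 20, this raises IndexError at runtime --
    # in kernel-mode C, the equivalent is an out-of-bounds read and a crash.
    params = rule["params"]
    return params[20]

def apply_rule_safe(rule: dict) -> str | None:
    # Validate the shape before indexing; fail closed instead of crashing.
    params = rule.get("params", [])
    if len(params) <= 20:
        return None  # reject the malformed rule, log it, keep running
    return params[20]

if __name__ == "__main__":
    bad_rule = {"params": ["x"] * 20}  # one field short
    print(apply_rule_safe(bad_rule))   # None -- handled gracefully
    apply_rule_unsafe(bad_rule)        # IndexError -- the outage
```

The fix is boring: check the shape and fail closed. No linter writes that check for you.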
We’ve automated guardrails, but we’ve also automated incompetence. The tooling got better; the results didn’t.