Is this measurable? Like code readability scores on the GitHub corpus over time?

Maybe. Personally, I've observed an increase in major system and security failures over the past five years, especially failures that hit very large tech companies. You could measure these public failures and see whether frequency or impact has increased.
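As a rough sketch of the frequency side, assuming a hand-curated CSV of public incidents (the incidents.csv name and its date column are hypothetical placeholders):

```python
# Count publicly reported failures per year from a hand-curated dataset.
# Assumes incidents.csv has a "date" column in YYYY-MM-DD form -- both the
# file and the schema are hypothetical.
import csv
from collections import Counter

def incidents_per_year(path: str = "incidents.csv") -> Counter:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["date"][:4]] += 1  # bucket by year
    return counts

if __name__ == "__main__":
    for year, n in sorted(incidents_per_year().items()):
        print(year, n)
```

The counting is the easy part; the hard part is agreeing on what counts as a "major failure" and collecting it consistently across years.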

The number of security failures now is nowhere near what it was in the golden age of malware in the '90s and early 2000s.

The #1 security exploit today is tricking the user into letting you in, because attacking the software is too hard.

You make a strong point, but now we also have smartphones, IoT devices, and cloud networks EVERYWHERE. There is a huge amount of shared open-source code (hence supply-chain attacks), and there are plenty of open-source attacker tools, vulnerability databases, and ready-made exploits (see nuclei on GitHub).

Yes, many/most systems now offer some form of authentication, and many offer MFA, but look at the recent Redis vulns: thousands of Redis instances vulnerable to RCE are sitting on the public internet right now.
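And checking whether an instance is exposed is trivial. A minimal sketch using the redis-py client, against a host you're authorized to test (an unauthenticated PING succeeding means the server accepts commands with no password):

```python
# Check whether a Redis instance accepts commands without authentication.
# Only run this against hosts you own or are authorized to test.
import redis

def accepts_unauthenticated_commands(host: str, port: int = 6379) -> bool:
    try:
        client = redis.Redis(host=host, port=port, socket_timeout=2)
        return client.ping()  # succeeds only if no AUTH is required
    except redis.AuthenticationError:
        return False  # server requires a password -- good
    except (redis.ConnectionError, redis.TimeoutError):
        return False  # unreachable, refused, or timed out

if __name__ == "__main__":
    print(accepts_unauthenticated_commands("127.0.0.1"))
```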

Bah.

It's #1 because it's easier than the alternative. But the alternative isn't hard either; it's just not worth the effort.

The complaint is not about the readability of the code but about the quality and cost-effectiveness of the deployed software.

Code readability has nothing to do with it.

I suppose it could be quantified by the amount of financial damage to businesses. We could start with the high-profile incidents we actually know about, like the CrowdStrike one.
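Something like this, where every name and figure below is a placeholder (real numbers would have to come from filings, insurer reports, or postmortems):

```python
# Sum estimated direct losses per year from known public incidents.
# All incident names and dollar amounts are illustrative placeholders,
# not real estimates.
from collections import defaultdict

incidents = [
    ("2024", "incident A", 5_000_000_000),
    ("2024", "incident B", 250_000_000),
    ("2023", "incident C", 1_000_000_000),
]

cost_by_year = defaultdict(int)
for year, _name, cost in incidents:
    cost_by_year[year] += cost

for year in sorted(cost_by_year):
    print(year, f"${cost_by_year[year]:,}")
```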

But I'm merely speaking as a user. Bugs are a daily occurrence in operating systems, games, websites, and, increasingly, "smart" appliances. Some of this is just visibility, since software is everywhere now compared to a decade or two ago, but even accounting for that, there's far more buggy software out there than robust, stable software.