I absolutely buy that theory.
I believe the normalization of catastrophe was a natural consequence of VC money: VCs don't care about structurally sound companies, much less structurally sound products; what they want is a unicorn that can produce a good-enough prototype and exit at enormous profit.
Consequently, VC-backed companies invest in tools that make prototyping easier and in developers who are hopefully good at prototyping (or who at least write code quickly), and ignore everything else. And since the surviving VC-backed companies become giants (or at least everybody believes they will), everybody follows their lead. LLMs, of course, are the next stage of that.
I've seen this in various domains. I've seen it with IoT devices coded under the clear (but unsaid) assumption that they will never be upgraded. I've seen it with backends coded under the clear (but unsaid) assumption that the product will have failed before any security hole is exploited. I've seen software tools developed and shipped under the clear (but unsaid) assumption that they're meant to woo investors, not to help users.
We're going to pay for this, dearly. By doing this, we have turned cyber warfare – something that was a fantasy a few decades ago – into an actual reality. We have multiplied by several orders of magnitude the resources we need to run basic tools.
And it's a shame, because there _is_ a path towards using LLMs to produce higher-quality code. I've seen a few teams invest in that, discreetly. But they're not making headlines, and it's even possible that they need to stay stealthy within their own orgs, because it's not "productive" enough.