Prior to the advent of LLMs, I had this concept of the 'complexity horizon': a [hand-built] software system naturally tends to grow more and more complex until no one can understand it - until it hits the complexity horizon. And there it stays, essentially unmaintainable.
With LLMs, you can race straight for that horizon, punch right through, and continue far beyond! But then, of course, you find yourself in a place without reason (the real hell), with all the horror and madness that entails.