I think you're right that LLMs are democratizing access to coding, but unless and until AI models reach a point where they can say 'no' to their users, the scenario you're imagining ('endlessly configurable apps') will probably lead to software that collapses under its own complexity.

Years ago, I supported a team of finance professionals who were mostly quite competent at coding but knew nothing about software engineering. They had thousands of scripts and spreadsheets. They used version control, but kept a separate long-lived branch for each client-specific variation of each model, so a fix made in one place never propagated to the others. There were no tests for anything; half the tools would break twice a year when the clocks changed.
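To make the clock-change failures concrete: the classic version of that bug is doing duration arithmetic on naive local timestamps across a daylight-saving transition. A minimal sketch in Python (the zone and dates here are purely illustrative; I don't know exactly what their scripts did):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

LONDON = ZoneInfo("Europe/London")  # hypothetical zone for illustration

# The naive pattern: treat local wall-clock times as if they advance uniformly.
start = datetime(2024, 3, 30, 23, 30)  # naive local time, the night before BST begins
end = datetime(2024, 3, 31, 2, 30)     # naive local time, after the spring-forward jump
print(end - start)                     # 3:00:00 -- but only 2 real hours have elapsed

# Zone-aware datetimes account for the hour the clocks skipped:
print(end.replace(tzinfo=LONDON) - start.replace(tzinfo=LONDON))  # 2:00:00
```

Any script that schedules work or measures elapsed time against naive local timestamps hits some variant of this twice a year.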

They weren't dumb, but their incentives had nothing to do with building anything we might recognize as an engineered application. I suspect something similar will happen when end users are turned loose with AI.