Same reason the visual programming paradigm failed: the main problem is not the code.
While writing simple functions may be mechanistic, being a developer is not.
'Guess some missing answers' is why Waterfall, or any big upfront design, has failed.
People aren't simply loading pig iron into rail cars like Taylor assumed.
The assumption of perfect central design with perfect knowledge and perfect execution simply doesn't work for systems that are far more like an organism than a machine.
Waterfall fails when domain knowledge is missing. Engineers won't take "obvious" problems into consideration when they don't even know what the right questions to ask are. When a system gets rebuilt for the 3rd time, the engineers do know what to build and those basic mistakes don't get made.
Next gen LLMs, with their encyclopedic knowledge about the world, won't have that problem. They'll get the design correct on their first attempt because they're already familiar with the common pitfalls.
Of course we shouldn't expect LLMs to be a magic bullet that can program anything. But if your frame of reference is "visual programming", where the goal is to turn poorly-thought-out requirements into a reasonably sensible state machine, then we should expect LLMs to get very good at that compared to regular people.
LLMs are NLP; what you are talking about is NLU, which has been considered an AI-hard problem for a long time.
I keep looking for discoveries that show any movement there. But LLMs are still basically pattern matching and pattern finding.
They can do impressive things, but they actually have no concept of what the 'right thing' even is; it is statistics, not philosophy.