I can't tell you how many times I've seen people write shoddy, ambiguous prompts and then blame the LLM for not being able to read their minds.
If you write a prompt with perfect specificity as to what you want done, an agent like "GitHub Copilot + Claude" can work at about the same level as a senior dev. I do it all day long. It writes complex SQL, complex algorithms, etc.
Saying it only does boilerplate well reminds me of my mother, who was brainwashed by a PBS TV show into thinking LLMs can only finish sentences they've seen before and cannot reason through things.
You're still talking past my points. Look at the example I gave. Does it seem like the problem was due to an ambiguous prompt?
Even if my prompt was ambiguous, the LLM has no excuse for producing code that does not type-check, or that crashes in an obvious way when run. The ambiguity should affect what the code tries to do, not its basic quality.
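To make that distinction concrete, here's a toy sketch (a hypothetical of my own, not the actual example I gave): take an ambiguous prompt like "sum the items in a list". The model is free to pick any reasonable reading, but whatever it picks, the output should still type-check and run.

    // Fine: picks one interpretation (numbers only, empty list sums to 0)
    function sumItems(items: number[]): number {
      return items.reduce((total, x) => total + x, 0);
    }

    // Not fine: fails the type checker under any reading of the prompt
    // function sumItems(items: number[]): number {
    //   return items.reduce((total, x) => total + x.value, 0); // 'value' does not exist on type 'number'
    // }

    console.log(sumItems([1, 2, 3])); // 6

Ambiguity can change which of several correct programs you get; it can't explain getting a program that doesn't compile.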
And your use of totalizing adjectives like "zero ambiguity" and "perfect specificity" tells me your arguments are somewhat suspect. There's no such thing as "zero" or "perfect" when it comes to architecting and implementing code.
When it comes to zero ambiguity and perfect specificity, here's how I define them: if I gave the same exact prompt wording to a human, would there be any questions they'd need to ask me before starting the work? If they'd need to ask a clarifying question before starting, then I wasn't clear; otherwise I was. If you want to balk at phrases like "perfectly clear", you're just nitpicking at semantics.