> There really is no cost difference between writing shitty code and "perfect" code now; its just a matter of how good you are at describing "perfect" to the LLM.
The consequent is at odds with the antecedent. It's a performative contradiction (if the output were truly "free", the operator's skill would be a zero-value variable; yet by requiring skill, you acknowledge a cost), as I argue below.
> The cost of "perfect" is only perhaps a few fractions of a cent higher than shitty.
Does your cost model account for the cost of specification, of review, and of the additional cycles required when review fails or the specification itself needs to be adjusted?
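To make that concrete, here is a toy expected-cost model (all numbers and names are hypothetical, purely illustrative): even if token generation is nearly free, each failed review triggers another specify/generate/review cycle, so specification and review quickly dominate the total.

```python
def expected_cost(gen_cost, spec_cost, review_cost, p_fail):
    """Toy model: expected total cost of reaching 'perfect' output when
    each failed review forces a spec adjustment plus another
    generate/review cycle. Cycle count follows a geometric distribution."""
    expected_cycles = 1 / (1 - p_fail)   # mean number of cycles
    retries = expected_cycles - 1        # failures that need a spec fix
    return (spec_cost                            # initial specification
            + expected_cycles * (gen_cost + review_cost)
            + retries * spec_cost)               # spec rework per failure

# Hypothetical figures: generation $0.01, specification $50 of labor,
# review $20 of labor, 30% chance a review cycle fails.
total = expected_cost(gen_cost=0.01, spec_cost=50, review_cost=20, p_fail=0.3)
print(round(total, 2))  # generation is ~1.4 cents of a ~$100 total
```

The point of the sketch: the `gen_cost` term is negligible, exactly as the parent claims, but it is also the only term the parent's cost model includes.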
> If we simply tell the agent these rules ahead of time, we get "perfect", consistent code each time
No. In the simplest case, the cost of perfection simply moves up the chain of abstraction, from implementation (coding) to design and specification. In practice it also splits, pushing part of that cost downstream to verification.
This isn't some special, magical insight of mine; I'm reiterating Tesler's Law (the conservation of complexity) right back to you.
I also encourage you to read some software history: for decades it has been trivial to generate perfectly working CRUD from ER and UML diagrams, no LLM necessary. The real insight is understanding why we continue to hire cheap human labor to churn out CRUD instead of using those tools.
The cost of software is, and always has been, in figuring out the intent, not in generating the syntax.
I wish pg were more active on HN. I expect this is one of the reasons he wanted founders to have and share the pain points of their (potential) customers: figuring out the intent is expensive. Mistake the intent, and the best-case scenario is a pivot.