> Put another way, if you don't know what correct is before you start working then no tradeoff exists.

This goes out the window the first time you get real users, though. Hyrum's Law bites people all the time.

"What sorts of things can you build if you don't have long-term sneaky contracts and dependencies" is a really interesting question, and it has a HUGE pool of answers that used not to be worth the effort. But it's largely a different pool of software than the kind people get paid to build today.
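For concreteness, a minimal sketch of how Hyrum's Law plays out (all names here are hypothetical): the documented contract says only "returns the item's tags", but callers quietly depend on an implementation detail.

```python
def get_tags(item):
    # Documented contract: "returns the item's tags".
    # Implementation detail: they happen to come back sorted.
    return sorted(item["tags"])

# A user's code that silently depends on that unstated ordering:
def first_tag(item):
    return get_tags(item)[0]  # assumes "alphabetically first"

# A contract test that checks only the documented behavior:
assert set(get_tags({"tags": ["b", "a"]})) == {"a", "b"}
```

Swap `sorted()` for, say, insertion order and that assertion still passes, while `first_tag` starts returning different answers for real users. The unintended ordering became part of the contract the moment someone relied on it.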

> This goes out the window the first time you get real users, though.

Not really. Many users are happy for their software to change if it's a genuine improvement. Some users aren't, but you can always fire them.

Certainly there's a scale beyond which this becomes untenable, but it's far higher than "the first time you get real users".

But that's not what this is about:

> For many projects, maybe ~80% of the thinking about how the software should work happens after some version of the software exists and is being used to do meaningful work.

Some version of the software exists and now that's your spec. If you don't have a formal copy of that spec and rigorous testing against it, you're gonna get mutations that change unintended things, not just improvements.

Users are generally ok with - or at least understanding of - intentional changes. But now people are talking about no-code-reading workflows, where you just let the agents rewrite stuff on the fly to build new things until all the tests pass again. The in-code tests and the expectations/assumptions your users have about the product are likely wildly different - they always have been, and there's nothing inherent about LLM-generated code or test coverage percentages that changes this.
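A toy sketch of the gap (hypothetical function and test): the suite encodes only a slice of the real spec, so a rewrite can go green while changing behavior users depend on.

```python
def format_price(cents):
    # An agent's "improved" rewrite: rounds to whole dollars for readability.
    # The old version (not shown) returned "$2.00", "$19.99", etc.
    return f"${round(cents / 100)}"

def test_format_price():
    # The only case the suite happens to check:
    assert format_price(200) == "$2"

test_format_price()  # passes
```

Every test is green, but users who relied on seeing cents now get `"$2"` and `"$20"`. "All tests pass" only certifies the fraction of the spec someone thought to write down.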

"Some users will _accept_ 'improvements' IFF they don't break their existing use cases."

Fixed that for you.