> most tech debt isn’t actually created in the code, it’s created in product meetings. Deadlines. Scope cuts.

> When asked what would help most, two themes dominated

> Reducing ambiguity upstream so engineers aren’t blocked...

I do wonder how much LLMs would help here; this seems, to me at least, to be a uniquely human problem. Humans (managers, leads, owners, what have you) are the ones who interpret requirements, decide deadlines, features, and scope cuts, and are the ones liable for those decisions.

What could an LLM do to reduce ambiguity upstream? If it were trained on information about the requirements, that same information could simply be documented somewhere for engineers to refer to. And if it were to hallucinate or "guess" an answer without asking a person for clarification, and that answer turned out to be wrong, who would be responsible for it? IMO, the bureaucracy of waiting for clarification mid-implementation is a necessary evil. Clever engineers, through experience, might try to implement things in an open way that can easily accommodate the future changes they predict might happen.
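To make that last point concrete, here's a minimal sketch (hypothetical names, TypeScript) of what I mean by implementing in an "open" way: keeping the part of a requirement that is still ambiguous behind a small interface, so a later clarification swaps an implementation instead of rewriting every call site.

```typescript
// Hypothetical example: the shipping-fee rule is still ambiguous, so it lives
// behind a small interface rather than being hard-coded at each call site.
interface ShippingFeePolicy {
  feeFor(orderTotalCents: number): number;
}

// Today's best guess while the requirement is unclear: a flat fee.
class FlatFeePolicy implements ShippingFeePolicy {
  feeFor(_orderTotalCents: number): number {
    return 500; // 5.00 flat fee
  }
}

// If the clarified requirement turns out to be "free above a threshold",
// only a new policy is added; the checkout code below stays untouched.
class FreeAboveThresholdPolicy implements ShippingFeePolicy {
  constructor(private thresholdCents: number) {}
  feeFor(orderTotalCents: number): number {
    return orderTotalCents >= this.thresholdCents ? 0 : 500;
  }
}

function checkoutTotal(orderTotalCents: number, policy: ShippingFeePolicy): number {
  return orderTotalCents + policy.feeFor(orderTotalCents);
}

console.log(checkoutTotal(4200, new FlatFeePolicy()));                // 4700
console.log(checkoutTotal(4200, new FreeAboveThresholdPolicy(4000))); // 4200
```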

As for the second point,

> A clearer picture of affected services and edge cases

> three categories stood out: state machine gaps (unhandled states caused by user interaction sequences), data flow gaps, and downstream service impacts.

I'd agree. Perhaps when a system is complex enough, and a developer is laser-focused on a single component of it, it is easy to miss gaps that only appear when other parts of the system are used in conjunction with it. I remember that a while ago it was a popular take that LLMs were a useful tool for generating unit tests, both because tests tend to be repetitive and because LLMs were usually good at finding edge cases to test, some of which a developer might have missed.
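On the state machine gaps specifically, here is a rough sketch (hypothetical order-flow states, TypeScript) of the kind of gap I think the article means, and how an exhaustive switch over a discriminated union can at least surface an unhandled state at compile time rather than in production:

```typescript
// Hypothetical order flow: a user interaction sequence (cancelling while a
// refund is pending) introduces a state the original handler never anticipated.
type OrderState =
  | { kind: "placed" }
  | { kind: "shipped"; trackingId: string }
  | { kind: "cancelled" }
  | { kind: "refund_pending" }; // added later; every handler must now cover it

function statusLabel(state: OrderState): string {
  switch (state.kind) {
    case "placed":
      return "Order placed";
    case "shipped":
      return `Shipped (${state.trackingId})`;
    case "cancelled":
      return "Cancelled";
    case "refund_pending":
      return "Refund in progress";
    default: {
      // If a new state is added to OrderState but not handled above,
      // this assignment no longer type-checks, flagging the gap early.
      const unhandled: never = state;
      return unhandled;
    }
  }
}
```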

---

I will say, it is refreshing to see a take on coding assistants being applied to other aspects of the job instead of just writing code, which, as the article pointed out, came with its own set of problems (increased inefficiencies in other parts of the development lifecycle, potential AI-introduced security vulnerabilities, etc.).