The bit about strict guardrails helping LLMs write better code matches what we have been seeing. We ran the same task under loose and strict lint configurations, and the difference in output quality was noticeable.
What was surprising is that it wasn't just about catching errors after generation. The model seemed to anticipate the constraints and generated cleaner code from the start. My working theory is that strict, typed configs give the model a cleaner context to reason from, almost like telling it what good code looks like before it starts.
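For concreteness, by "strict" I mean something like typescript-eslint's type-checked presets on top of a tsconfig with "strict": true and noUncheckedIndexedAccess. Roughly this shape (illustrative; our exact rule set differed):

    // eslint.config.mjs: the general shape of a "strict" setup
    import tseslint from "typescript-eslint";

    export default tseslint.config(
      // type-aware rules, which need project type information to run
      ...tseslint.configs.strictTypeChecked,
      {
        languageOptions: {
          parserOptions: { projectService: true },
        },
        rules: {
          // a couple of the rules that seemed to matter most for us
          "@typescript-eslint/no-explicit-any": "error",
          "@typescript-eslint/explicit-function-return-type": "error",
        },
      },
    );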
The piece I still haven't solved: even with perfect guardrails per file, models frequently lose track of cross-file invariants. You can have every individual component lint-clean and still end up with a codebase that silently breaks when components interact. That seems like the next layer of the problem.
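A toy version of what I mean (hypothetical, but it is the shape of failure we keep hitting): each file below passes lint and typecheck on its own, and the composition is still wrong.

    // cache.ts. Invariant: keys are normalized to lowercase on write.
    const store = new Map<string, string>();

    export function put(key: string, value: string): void {
      store.set(key.toLowerCase(), value); // normalizes here
    }

    export function get(key: string): string | undefined {
      // assumes callers pass lowercase keys; the invariant is invisible to the type system
      return store.get(key);
    }

    // caller.ts: lint-clean, type-clean, and silently broken
    import { put, get } from "./cache";

    put("UserId", "42");
    get("UserId"); // undefined: the write path lowercased the key, the read path did not

No per-file guardrail catches that, because the invariant only exists across the two files.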
We've been building our frontend with AI assistance and the bottleneck has shifted from writing code to reviewing it. Faster tooling helps, but I wonder if the next big gain is in tighter feedback loops — seeing your changes live as the AI generates them, rather than waiting for a full build cycle.
Exactly this. And what makes it compound is that you cannot build muscle memory for patterns you have already reviewed. Same prompt, different output every time, so every generation is a fresh read even if you have seen similar code before.
The feedback loop angle is interesting. Real-time linting during generation rather than after could help catch issues earlier, but I think the deeper problem is the non-determinism. Even with instant feedback, if the output changes on each run you are still starting from scratch each time.
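Something like this is what I picture, as a minimal sketch. The async stream of generated chunks is my assumption about the interface; the ESLint calls are its real Node API:

    // lint-as-you-stream.ts: lint the accumulated output at statement boundaries
    // instead of waiting for generation to finish. Assumes the project has an
    // ESLint config that new ESLint() can pick up.
    import { ESLint } from "eslint";

    const eslint = new ESLint();

    export async function lintWhileStreaming(
      chunks: AsyncIterable<string>, // hypothetical streaming interface
    ): Promise<void> {
      let buffer = "";
      for await (const chunk of chunks) {
        buffer += chunk;
        // crude heuristic: only lint when the buffer plausibly ends a statement,
        // so we are not flagging half-written code on every token
        const trimmed = buffer.trimEnd();
        if (!trimmed.endsWith(";") && !trimmed.endsWith("}")) continue;
        const [result] = await eslint.lintText(buffer, { filePath: "generated.ts" });
        for (const msg of result.messages) {
          console.warn(`${msg.line}:${msg.column} ${msg.ruleId ?? "parse"}: ${msg.message}`);
        }
      }
    }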
Have you found anything that actually reduces the review time per component, or is it mostly about finding issues faster?
Are your frontend builds actually so slow that you're not seeing them live? I've gotten used to most frontend builds taking single-digit seconds or less for what feels like a decade now.
Not build speed; the human review cycle. When the AI generates a component, I still need to read through it manually to make sure it does what I intended, handles edge cases, and fits the existing patterns. That takes 8-12 minutes per component regardless of how fast the build is.
The slow part is not the computer. It is me reading AI-generated code line by line before I trust it enough to ship.