Code that you can understand and fix later is acceptable quality, per my definition.

Either way, LLMs are actually high up the quality spectrum, as they generate a very consistent style of code for everyone. That uniformity is good when other developers have to read and troubleshoot the code.

> Code that you can understand and fix later is acceptable quality, per my definition.

This definition limits the number of problems you can solve this way. It basically means a buildup of technical debt - good enough for throwaway code, but unacceptable as a long-term strategy (a growth killer for scale-ups).

> Either way, LLMs are actually high up the quality spectrum

This is not what I saw; it's certainly not great. But that may depend on the stack.

I'm curious: were you in an existing codebase or a greenfield project?

I've found LLMs tend to struggle getting a codebase from 0 to 1. They tend to swap between major approaches somewhat arbitrarily.

In an existing codebase, it's very easy to ground them in examples and pattern matching - something like the sketch below.
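
By "ground them in examples" I mean something like pasting an existing module into the prompt so the model imitates its conventions. A rough sketch of the idea - the file path, task description, and helper name here are made up for illustration:

```python
from pathlib import Path

# Hypothetical sketch of grounding an LLM in an existing codebase:
# include a reference module in the prompt so the model copies its
# naming, structure, and conventions instead of inventing its own.

def build_grounded_prompt(reference_file: Path, task: str) -> str:
    reference_code = reference_file.read_text()
    return (
        "You are working in an existing codebase. Match the naming, structure, "
        "and error-handling conventions of the reference module below.\n\n"
        f"--- Reference module: {reference_file.name} ---\n"
        f"{reference_code}\n"
        "--- End reference ---\n\n"
        f"Task: {task}\n"
        "Write the new module in the same style as the reference."
    )

if __name__ == "__main__":
    prompt = build_grounded_prompt(
        Path("services/user_service.py"),  # an existing module the model should imitate
        "Add an order_service module with the same repository/service split.",
    )
    print(prompt)  # send this to whichever model or coding tool you use
```

Most coding assistants do a version of this for you when they pull files into context; the point is just that existing code gives the model concrete patterns to match, which greenfield projects don't have.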

Greenfield. It's an interesting question, though, whether some model will perform better on today's project tomorrow because of more reference data. I would expect LLMs to lag behind on the latest technology, simply because their reference data skews toward older examples and may not include the latest versions of platforms or frameworks. I have seen LLMs break on basic CRUD tasks because of that.