Well, I'm sure we've all seen code produced by human developers that is 10x worse than what my Claude Code produces (certainly I have), so let's be real. And it's improving scary fast.

I can understand how a mediocre SWE thinks and can anticipate which corners were cut; I have no idea what an LLM is thinking.

This seems like a lack of experience. The more I work with LLMs, the better I get at predicting what they’ll get wrong. I then shape my prompts to avoid the mistakes.

Try working with a bad dev who's using an LLM.

I think the bar has been raised, for sure. There's code I work on from prior seniors that is worse than what our current juniors write. I'm assuming AI is assisting with that, but as long as the PR looks good, it's no different to me.

I've noticed that use of generally OK design patterns and adherence to idiomatic code have increased, while attention to small but critical details has stayed the same or maybe slightly decreased.

Hard disagree. Humans fail in ways I know, can predict, and know where to look for. ML coding assistants fail in all sorts of idiotic ways and thus every damn line needs to be scrutinized.
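
To make that concrete, here's a hypothetical sketch (my own invention, not actual model output) of the kind of code that skims as clean, idiomatic Python but hides a classic landmine:

    # Looks fine at a glance, but the mutable default argument
    # is created once and shared across every call.
    def dedupe(items, seen=set()):
        """Return only the items not seen before."""
        fresh = [x for x in items if x not in seen]
        seen.update(fresh)
        return fresh

    print(dedupe([1, 2, 3]))  # [1, 2, 3]
    print(dedupe([2, 3, 4]))  # [4] -- state leaked from the first call

A human reviewer knows to expect this from a junior and where to look for it; an assistant can drop something like this (or something far weirder) anywhere in the diff.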

What actually scares me is this: with humans you can at least follow their train of thought. But if an LLM just rewrites everything each time, that's impossible to follow, and the same review work has to be done over and over again.

> Humans fail in ways I know, can predict, and know where to look for.

You clearly haven't worked with humans.
