In my experience they can definitely write concise and reusable code. You just need to say to them “write concise and reusable code.” Works well for Codex, Claude, etc.

Writing reusable code is of no use if the next iteration doesn’t know where it is and rewrites the same (reusable) code again.

I guide the AI. If I see it produce stuff that I think can be done better, I either just do it myself or point it in the right direction.

It definitely doesn't do a good job of spotting areas ripe for building abstractions, but that is our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love.

As far as I can tell, what's not to love about that?

If you’re repeatedly prompting, I will defer to my usual retort when it comes to LLM coding: programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language. It’s generally much faster for me to write the terse language directly than play a game of telephone with an intermediary in the verbose language for it to (maybe) translate my intentions into the terse language.

In your example, you mention that you prompt the AI and if it outputs sub-par results you rewrite it yourself. That’s my point: over time, you learn what an LLM is good at and what it isn’t, and just don’t bother with the LLM for the stuff it’s not good at. Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with. That’s not the LLM replacing you, that’s the LLM augmenting you.

Enjoy your sensible use of LLMs! But LLMs are not the silver bullet the billions of dollars of investment desperately want us to believe they are.

> programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language

Why are we uniquely capable of doing that, but an LLM isn't? In plan mode I've been seeing them ask for clarifications and gather further requirements.

Important business context can be provided to them as well.

We are uniquely capable of doing that because we invented that :) It’s a self-serving definition, a job description.

This isn’t an argument against LLMs’ capabilities. But the burden of proof is on the LLMs’ side.

An LLM isn’t (yet?) capable of remembering a long-term representation of the codebase. Neither is it capable of remembering a long-term representation of the business domain. AGENTS.md can help somewhat but even those still need to be maintained by a human.
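For what it’s worth, an AGENTS.md is just a plain markdown file checked into the repo that agents read for context. A minimal sketch might look like this (the file paths, commands, and conventions here are illustrative, not from any particular project):

```markdown
# AGENTS.md

## Project layout
- `src/billing/` — invoicing logic; shared money helpers live in `src/lib/money.ts`
- `src/api/` — HTTP handlers only; business logic does not belong here

## Conventions
- Search `src/lib/` for an existing helper before writing a new one
- Run `npm test` before proposing changes
```

Which illustrates the point: every line of that file is a claim about the codebase that drifts out of date unless a human keeps it current.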

But don’t take it from me - go compete with me! Can you do my job (which is 90% talking to people to flesh out their unclear business requirements, and only 10% actually writing code)? If so, go right ahead! But since the phone has yet to stop ringing, I assume LLMs are nowhere near there yet. Btw, I’m helping people who already use LLM-assisted programming and reach out to me because they’ve hit its limits and need an actual human to sanity-check.