> Engineering quality doesn't disappear when AI writes code. It migrates to specs, tests, constraints, and risk management.
> Code review is being unbundled. Its four functions (mentorship, consistency, correctness, trust) each need a new home.
> If code changes faster than humans can comprehend it, do we need a new model for maintaining institutional knowledge?
The humans we have in these roles today are going to suffer. The problem starts at hiring: we rewarded memorizing algorithms and solving inane brain teasers rather than looking for people skilled at systems understanding, code reading (a distinct skill in its own right), and serious review (rather than bikeshedding à la tabs versus spaces).
LLMs are just moving us from hammers and handsaws to battery-powered tools. For decades, the hiring practices above tested "how fast can you pound in nails," not "are you good at framing a house or building a chair?"
And we're still talking about LLMs in the abstract. Are you having a bad time because you're cutting and pasting from a browser into your Vim instance? Are you having a bad time because you want sub-token performance from the tool? Is your domain a gray box that any new engineer needs to learn (and LLMs are trained, they don't learn)?
Your model, your language, your domain, the size of the task you're asking to be delivered, and the complexity of your current code base are as big a part of the conversation. Put simply, what you code in and how you integrate LLMs really matters to the outcome. And without this context we're going to waste a lot of time talking past each other.
Lastly, we have 50 years of goodwill built up with end users who expect systems to be reliable and repeatable. LLMs are NOT that; even I have moments where it's a stumbling block, and I know better. It's one of a number of problems we're going to face in the coming decade. This issue, alongside security, is going to erode trust in our domain.
I'm looking forward to moving past the hype, hope, and hate, and getting down to the brass tacks of engineering. There is a lot of good to be had, if we can just manage an adult conversation!
Same. I'm seeing people have a lot of difficulty working with agents and writing prompts that let the agent go end-to-end on the work. They can't write prose that explains a problem in a way the agent can run with and come back with a solution; they can only do the "little change with Claude Code" workflow, and that just makes you less productive.
I don't think the industry is ready, or has the manpower, to deliver on all the promises being made, and a lot of businesses and people will suffer because of that.
People just need to lower their expectations a bit. There's a large space between "prompting for end-to-end solution" and "little change".
I agree with the spirit of what you're saying, but...
> we have 50 years of good will built up with end users that the systems are reliable and repeatable
There's good, yes, but we've also raced to build dystopian bullshit and normalized identity theft because most software is garbage. There might not be as much goodwill as you think. Software eats the world, and many people simply feel helpless. The erosion of trust you're predicting has already happened, or never existed, IMHO.
LLMs may not one-shot reliable and repeatable systems, but they're a powerful tool that I hope will end up improving systems overall, for the reasons you've mentioned, among others.