> But also, man, it seems like we’re headed in a direction where writing code by hand is passé
Do you think humans will be able to be effective supervisors or "review-engineers" of LLMs without hands-on coding experience of their own? And if not, how will they get it? That training opportunity is exactly what the given issue in matplotlib was designed to provide, and safeguarding it was the exact reason the LLM PR was rejected.
(In this response I may be heavily discounting the value of debugging, but unit tests also exist)
This is something I think needs to be parsed out more carefully; a lot of engineers hold this perspective, and I don't find it precise enough.
In college, I got a baseline familiarity with the mechanics of coding, i.e. “what are classes, functions, variables.” But once I graduated and entered the workforce, a lot of my pedagogy for “writing good code,” as it were, came from reading about patterns of good code: SOLID, functional style, favoring immutability. So the impetus for good code isn’t really time in the saddle as much as it is time in the forums/blogs/oreilly-books.
Then my focus shifted more towards understanding networking patterns and protocols and paradigms. Also book-learning driven. I’ll concede that at a micro level, finagling how to make the system stable did require time in the saddle.
But these days when I’m reading a PR, I’m doing static analysis which is primarily not about what has come out of my fingers but what has gone into my brain. I’m thinking about vulnerabilities I’ve read about, corner cases I can imagine.
I’d say once you’ve mastered the mechanics of whatever language you’re programming in, you could become equivalently capable by largely reading and thinking.
> So the impetus for good code isn’t really time in the saddle as much as it is time in the forums/blogs/oreilly-books.
I disagree strongly with this. I read the books, blog-posts, forums, etc early in my career (if you can call it that when I was essentially a teen with a hobby), but didn't fully understand how to apply them, and notably when to apply them, until I had sufficient "time in the saddle". You don't understand the problems that code architecture techniques solve until you've actually had to modify a messy project with a lot of code already written.
> you could become equivalently capable by largely reading and thinking
Theoretically possible, but doing is often orders of magnitude more efficient. You could read reams of books about gardening without actually knowing how to dig a hole.
Part of the deal is that typing forces you to actually pay attention instead of skimming and assuming you got the gist. Following a tutorial by copy-pasting never really worked as well as typing the code, so why would watching an LLM code be any better? I suspect that even as you're running "static analysis" in your head and looking for vulnerabilities, you're using neural pathways forged while coding by hand.
If past patterns are anything to go by, the complexity moves up to a different level of abstraction.
Don't take this as a concrete prediction - I don't know what will happen - but rather an example of the type of thing that might happen:
We might get much better tooling around rigorously proving program properties, and the best jobs in the industry will be around using them to design, specify and test critical systems, while the actual code that's executing is auto-generated. These will continue to be great jobs that require deep expertise and command excellent salaries.
At the same time, a huge population of technically-interested-but-not-that-technical workers will build casual no-code apps, and the stereotypical CRUD developer just goes extinct.
>Do you think humans will be able to be effective supervisors or "review-engineers" of LLMs without hands-on coding experience of their own? And if not, how will they get it?
They won't. Instead, either AI will improve significantly or (my bet) average code will deteriorate, as AI training increasingly eats AI slop, which includes AI code slop, and devs lose basic competencies and become glorified, semi-ignorant managers of AI agents.
The decline of CS degrees, with people just handing in AI work, will further ensure they don't even know the basics after graduating to begin with.