Ooh, the "pets vs. livestock" analogy really works better than the "craftsmen vs. slop-slinger" arguments.
Using an LLM doesn't mean you devalue well-crafted or understandable results, but it does indicate a significant shift in how you view the code itself. It's more about emotional attachment to code vs. code as a means to an end.
I don't think it's exactly emotional attachment. It's the likelihood that I'm going to get an escalated support ticket caused by this particular piece of slop/artisanally-crafted functionality.
Not to slip too far into analogy, but that argument feels a bit like a horse-drawn carriage operator saying he can't wait to pick up all of the stranded car operators when their mechanical contraptions break down on the side of the road. But what happened instead was the creation of a brand new job: the mechanic.
I don't have a crystal ball and I can't predict the actual future. But I can see the list of potential futures and I can assign likelihoods to them. And among the potential futures is one where the need for humans to fix the problems created by poor AI coding agents dwindles as the industry completely reshapes itself.
Both can be true. There were probably a significant number of stranded motorists that were rescued by horse-powered conveyance. And eventually cars got more convenient and reliable.
I just wouldn't want to be responsible for servicing a guarantee about the reliability of early cars.
And I'll feel no sense of vindication if I do get that support case. I will probably just sigh and feel a little more tired.
Yes, the whole point is that it is true. But only for a short window.
So consider differing perspectives. Like a teenage kid hanging around the stables, listening to the veteran coachmen laugh about the new loud, smoky machines, proudly declaring how they'll be the ones mopping up the mess, picking up the stragglers, cashing in.
The career advice you give to the kid may be different than the advice you'd give to the coachman. That is the context of my post: Andrew Ng isn't giving you advice, he is giving advice to people at the AI school who hope to be the founders of tomorrow.
And you are probably mistaken if you think the solution to the problems LLMs create will have those kids looking to the past. Just as the ultimate fix for car reliability wasn't a return to horses but the invention of the mechanic, the solution to the problems caused by AI may not be a return to some software engineering past that the old veterans still hold dear.
I don't know what's economically viable, but I like writing code. It might go away or diminish as a viable profession, which might make me a little sad. There are still horse enthusiasts who do horse things for fun.
Things change, and that's ok. I guess I just got lucky so far that this thing I like doing just so happens to align with a valuable skill.
I'm not arguing for or against anything, but I'll miss it if it goes away.
In my world that isn't inherently a bad thing. Granted, I belong to the YAGNI crowd of software engineers who put the business before tech architecture. I should probably mention that I don't think this means you should skimp on safety and quality where necessary, but I do preach that the point of software is to serve the business as fast as possible. I take this to the extent that I actually think our BI people, who are most certainly not capable programmers, are good at building programs. They mostly need oversight on external dependencies, but it's actually amazing what they can produce in a very short amount of time.
Obviously their software sucks, and eventually parts of it always escalate into a support ticket that reaches my colleagues and me. It's almost always some form of performance issue; that's partly because we have monthly sessions where they can bring us the issues they simply can't get to work. Anyway, I see that as a good thing. It means their software is serving the business, and now we need to deal with the issues to make it work even better. Sometimes that's because their code is shit; most times it's because they've hit an actual bottleneck and we need to replace part of their Python with a C/Zig library.
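To make that last kind of fix concrete, here's a minimal sketch of what the swap tends to look like, assuming a hypothetical hot loop (a sum of squared differences) and a hypothetical shared library libhotpath.so built from C or Zig that exports sum_sq_diff; the names, signature, and build commands are illustrative, not our actual code.

    # Sketch: move a measured Python hot path into a native shared library via ctypes.
    # The library and its exported function are hypothetical stand-ins.
    import ctypes

    # The pure-Python version of the bottleneck (what the BI folks wrote).
    def sum_sq_diff_py(xs, ys):
        return sum((x - y) ** 2 for x, y in zip(xs, ys))

    # The native version: load a C/Zig-built library that exports
    #   double sum_sq_diff(const double *xs, const double *ys, size_t n);
    _lib = ctypes.CDLL("./libhotpath.so")  # e.g. built with `zig build-lib` or `cc -shared`
    _lib.sum_sq_diff.restype = ctypes.c_double
    _lib.sum_sq_diff.argtypes = (
        ctypes.POINTER(ctypes.c_double),
        ctypes.POINTER(ctypes.c_double),
        ctypes.c_size_t,
    )

    def sum_sq_diff_native(xs, ys):
        n = len(xs)
        arr_t = ctypes.c_double * n  # contiguous C double[n]
        return _lib.sum_sq_diff(arr_t(*xs), arr_t(*ys), n)

    if __name__ == "__main__":
        a = [float(i) for i in range(1000)]
        b = [x + 0.5 for x in a]
        # Same answer either way; the native path just runs the loop in compiled code.
        print(sum_sq_diff_py(a, b), sum_sq_diff_native(a, b))

The point is that the rest of their Python stays exactly as they wrote it; only the measured hot path crosses the boundary.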
The important part of this is that many of these bottlenecks appear in areas that many software engineering teams I have known wouldn't necessarily have predicted. Meanwhile, a lot of the areas where traditional "best practices" call for better software architecture work fine for entire software lifecycles while being absolutely horrible AI slop.
I think that is where the emotional attachment is meant to fit in: being fine with all the slop that never actually matters during a piece of software's lifecycle.