Both can be true. There were probably a significant number of stranded motorists who were rescued by horse-drawn conveyance. And eventually cars got more convenient and reliable.

I just wouldn't want to be responsible for servicing a guarantee about the reliability of early cars.

And I'll feel no sense of vindication if I do get that support case. I will probably just sigh and feel a little more tired.

Yes, the whole point is that it's true. But only for a short window.

So consider differing perspectives. Like a teenage kid hanging around the stables, listening to the veteran coachmen laugh about the new loud, smoky machines, proudly declaring how they'll be the ones mopping up the mess, picking up the stragglers, cashing it in.

The career advice you give to the kid may be different from the advice you'd give to the coachman. That is the context of my post: Andrew Ng isn't giving you advice; he is giving advice to people at the AI school who hope to be the founders of tomorrow.

And you are probably mistaken if you think solving the problems that arise from LLMs will send those kids looking to the past. Just as the ultimate solution to car reliability wasn't a return to horses but the invention of mechanics, the solution to problems caused by AI may not be a return to some software engineering past that the old veterans still hold dear.

I don't know what's economically viable, but I like writing code. It might go away or diminish as a viable profession, which would make me a little sad. There are still horse enthusiasts who do horse things for fun.

Things change, and that's ok. I guess I've just been lucky so far that this thing I like doing happens to align with a valuable skill.

I'm not arguing for or against anything, but I'll miss it if it goes away.