I think a good way to see it is: "AI is good for prototyping. AI is not good for engineering."
To clarify: AI tools can help you get things done really fast, but they lack both breadth and depth. You can move fast with them to generate proofs of concept (even for subproblems of large problems), but without breadth they miss the big-picture context, and without depth they lack the insights that any greybeard (master) has. The "engineering" side, on the other hand, is much more than "things work". It is about everything working in the right way: handling edge cases, being cognizant of context, designing failure modes, and all the rest. You could be the best programmer in the world and still not be a good engineer. (In the real world these are coupled, since the skills are learned simultaneously: you could be a perfect leetcoder and still be unhelpful on an actual team, but the skills correlate.)
The thing is, there will never be a magic button a manager can press to engineer a product. The thing is, a greybeard spends most of their time not on implementation but on design. The thing is, mastery requires experience, and that experience requires understanding nuanced things. Things that are non-obvious. There may someday be a magic button that lets an engineer generate all the code for a codebase, but that doesn't replace engineers. (I think this is also a problem with how we've been designing AI code generators. They seem designed for management to magically generate features, the same thing managers wish they could do with their engineers. The better tool would focus on generating code from an engineer's description.)
I think Dijkstra's comments apply today just as much as they did then. [0]
[0] On the foolishness of "natural language programming" https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
I was reading some work by Michael A. Jackson (the Problem Frames Approach) and T.S.E. Maibaum (mathematical foundations of software engineering), because I also had the impression that too much of the talk around LLM-assisted programming focuses on program text and annotations/documentation. Thinkers like Donald Schön thought about tacit knowledge-in-action and approached it through design philosophy. When looking at LLM-assisted programming, I call this shaded context.
As you say, software engineering is not only constructing program texts; it's not even only applied math, nor purely scientific. At least that is my stance. I suspect AI code editors have a lot of this tacit knowledge baked in (via the black box itself or its engineers), but we would be better off thinking about it explicitly.
100% agree