There's clearly a gap between how, or for what, LLM enthusiasts and I would use LLMs. When I've tried them, I've found the experience just as frustrating as you describe, and it takes away the elements of programming that make it tolerable for me. I don't even think I have especially high standards - I can be pretty lazy about anything outside of work.

I don't view LLMs as a substitute for thinking; I view them as an aid to research and study, and as a translator from pseudocode to syntax. That is, instead of trawling through all the documentation myself and double-checking everything manually, I let an LLM produce a solution of some quality, and if it agrees with how my mental model says things should work, I'll accept it or improve on it. And if I know what I want to do but don't know the exact syntax - as has happened recently with Swift while I explore macOS development - an LLM can translate my implementation ideas into something that compiles.

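To make up a small example of what I mean (the folder-listing view here is purely illustrative, not from any real project): I know I want "show the files in a folder as a list", but not the exact FileManager and SwiftUI incantations, and the LLM's only job is to fill in syntax roughly like:

    import SwiftUI

    struct FolderListView: View {
        let folder: URL
        @State private var entries: [String] = []

        var body: some View {
            List(entries, id: \.self) { name in
                Text(name)
            }
            .task {
                // the exact call I'd otherwise have to dig out of the docs
                entries = (try? FileManager.default
                    .contentsOfDirectory(atPath: folder.path)) ?? []
            }
        }
    }

Whether that's exactly right matters less than whether it matches the shape of what I already had in my head; if it does, I keep it and clean it up.
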
More to the point of the article, though, LLM enthusiasts do seem to view them as a substitute for thinking. They're not augmenting their application of knowledge with shortcuts and fast paths; they're entirely trusting the LLM to engineer things on its own. LLMs are great at creating the impression that they're suitable for this; after all, they're trained on tons of perfectly reasonable engineering data, and start to show all the same signals a naïve user would rely on to judge engineering quality... just without the quality.