If you're not able to review what it generates, you shouldn't be using it (and arguably you're the wrong person to be doing the boilerplate work in the first place).
Put differently, I go back to my original comment: AI is essentially a junior/mid dev to whom you can describe what needs to be done in enough detail. In either case, AI or dev, you'd review and/or verify the result.
> Or is it a refusal to do actual software engineering, letting the machine take care of it (deterministically) and moving up the ladder in terms of abstraction.
One could say the same of installing packages in most modern programming languages instead of writing the code from first principles.
> One could say the same of installing packages in most modern programming languages instead of writing the code from first principles.
I disagree, because libraries define an interface with (ideally) precise, reproducible semantics that you build against. They provide exactly what the grandparent is describing: a formal abstraction. When you have the choice between a library and an LLM at equal effort, the library is clearly preferable.
When an LLM is more time-efficient at a given coding task, that can be taken as a sign that no suitable library, tooling, or other abstraction exists for that use case.
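To make the contrast concrete (a hypothetical sketch of my own, in Python; the date-parsing scenario isn't from the thread): the library call comes with documented, reproducible semantics you simply rely on, while the hand-rolled or LLM-generated equivalent is code you now own and have to review.

    from datetime import datetime, timezone

    # Library: one call with precise, documented, reproducible semantics.
    ts = datetime.fromisoformat("2024-05-01T12:30:00+00:00")

    # LLM-style boilerplate (hypothetical): code you must review yourself,
    # covering only the cases the model happened to handle.
    def parse_utc_timestamp(s: str) -> datetime:
        date_part, time_part = s.rstrip("Z").split("T")
        y, mo, d = (int(x) for x in date_part.split("-"))
        h, mi, sec = (int(x) for x in time_part.split("+")[0].split(":"))
        return datetime(y, mo, d, h, mi, sec, tzinfo=timezone.utc)

    print(ts == parse_utc_timestamp("2024-05-01T12:30:00+00:00"))  # True, for this input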