The difference is, an expert engineer would flat-out refuse to do these things and would keep pushing back. Claude may sometimes attempt _one time_ to warn someone, and then (once consent fatigue sets in and they're just blindly clicking "yes") it ploughs right ahead without further complaint.
Do you really want the AI to not do the things you tell it?
It only knows what you tell it. If you tell it risky operations are OK, what do you expect?
That depends.
As per my root comment, if you ignore a lot of the marketing of AI and view it as just a tool, then I agree with your point about it doing what you tell it but I still want the tool to help me avoid making mistakes (and I’d like it to work quite hard at that - much harder, it seems, than it currently does). And probably to the extent that it refuses to run dangerous commands for me and tells me to copy/paste them and run them myself if I really want to take the risk.
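That "refuse and make me copy/paste it myself" behaviour needn't be sophisticated; it could be as simple as a deny-list gate in front of the tool's shell executor. A minimal sketch, assuming a plain regex deny-list (the patterns and function names here are hypothetical, not any real tool's implementation):

```python
import re

# Hypothetical deny-list of destructive shell patterns (illustrative
# only; a real tool would need a far more careful policy than this).
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\b",
    r"\bgit\s+push\s+--force\b",
    r"\bdrop\s+table\b",
]

def gate_command(cmd: str) -> bool:
    """Return True if the tool may run the command itself.

    For anything matching a dangerous pattern, refuse and hand the
    command back to the human to copy/paste and run deliberately.
    """
    for pattern in DANGEROUS_PATTERNS:
        if re.search(pattern, cmd, flags=re.IGNORECASE):
            print(f"Refusing to run: {cmd}")
            print("If you really want this, copy/paste and run it yourself.")
            return False
    return True
```

The point of routing refused commands back through the human's own hands is exactly the consent-fatigue problem above: typing a command yourself is a higher-friction confirmation than clicking "yes" for the hundredth time.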
If, however, we swallow the marketing hook, line and sinker: then yeah, I want the AI to behave like the experienced engineer it’s supposed to be.
An experienced engineer still gets decisions overridden all of the time and has to suck it up or get fired.
True... though an experienced engineer would also risk getting fired for doing all the other stuff the OP did, especially if they made minimal attempts to highlight the consequences to management in advance.