Why not? The individual grunt knows it's more productive, and managers already tolerate a non-zero amount of risk from incompetent or disgruntled workers anyway.
If you have clean access privileges, the productivity gain is worth the risk, a risk that is arguably only marginally higher. And if the workplace also provides the system, the efficiency gained in auditing operations makes up for any added risk.
Incompetent workers are liable.
But who is liable when the AI makes errors because it's running automatically?
> But who is liable when the AI makes errors because it's running automatically?
I'm guessing that would be the human who let the AI run loose on corporate systems.