> AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.

Except that one of them (humans) can explain themselves, and their actions can be held to account in the case of a legal issue, whereas an AI cannot; that makes such an entity completely unsuitable for high-risk situations.

This typical AI booster comparison has got to stop.

Love that you needed to make it clear that it's the humans who can explain themselves...

Employees can only be held accountable in cases of severe malice.

There is a good chance that the person actually responsible (e.g. the CEO, or someone delegated to be responsible) will soon prefer to have AIs do the work, since their quality can be quantified.

> Except, one can explain themselves (humans) and their actions can be held to account in the case of any legal issue whereas an AI cannot

You "own" the software it creates, which means you're responsible for it. If you use AI to commit crimes, you'll go to jail, not the AI.