Accountability and responsibility for an AI chatbot/tool/agent still lie solely with the human operator. Blaming the AI is an excuse to deflect blame rather than to identify and prevent the root causes that led to the error.
If the human operator cannot provide the necessary level of accountability - for example, because the agent acts too quickly, or needs high-level permissions to do what it has been asked to do - then the human needs to make the tool operate at a level where they can provide accountability: slowing it down, constraining it behind permission prompts, and carefully inspecting any dangerous tool calls before they happen (see the sketch below). You can't let a car drive itself at 300mph and trust that the autopilot will work - you need to drive at a speed where you can still reasonably take over and prevent unwanted behaviour.
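To make that concrete, here is a minimal sketch of one way to gate an agent's tool calls behind a human permission prompt. All the names here (DANGEROUS_TOOLS, execute_tool, gated_call) are hypothetical and illustrative, not any particular framework's API:

    # Minimal sketch of a human-in-the-loop gate around agent tool calls.
    # Everything here is hypothetical; real agent frameworks differ.
    DANGEROUS_TOOLS = {"shell", "delete_file", "send_email"}

    def execute_tool(name, args):
        # Stand-in for whatever actually runs the tool.
        print(f"running {name} with {args}")
        return "ok"

    def gated_call(name, args):
        # Force a human decision before any dangerous call runs,
        # so the operator stays in a position to be accountable.
        if name in DANGEROUS_TOOLS:
            print(f"Agent wants to call {name}({args})")
            if input("Allow? [y/N] ").strip().lower() != "y":
                return "denied by operator"
        return execute_tool(name, args)

The point isn't this particular code; it's that the approval step caps the agent's speed at the speed of the human reviewing it, which is exactly what makes accountability possible.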
Also: AIs cannot confess. They do not have access to their "thought process" (note that reasoning traces etc. do not constitute "internal thought processes", insofar as those can even be said to exist), and can only reconstruct likely causes from their observed output. This is distinct from a human confession, which can provide additional information (mental state, logical deductions, motivations, etc.) not readily apparent from external behaviour. The mere fact that someone assigns any value whatsoever to an AI "confession" demonstrates that they should not be trusted to operate these tools without supervision.