I think the operative word people miss when using AI is AGENT.

REGARDLESS of what level of autonomy an AI is given in real-world operations, from responsible human-supervised and human-reviewed publication to fully autonomous action, the AI agent should be serving as an AGENT. With a PRINCIPAL.

If an AI is truly agentic, it should be advertising who it is speaking on behalf of, and that person or entity should be treated as the party responsible.

The agent serves a principal, who in theory should have principles but based on early results that seems unlikely.

I think we're at the stage where we want AIs to be truly agentic, but right now they're loose cannons. I'm probably the last person to call for more regulation, but if you aren't closely supervising your AI right now, maybe you ought to be held responsible for what it does after you set it loose.

I agree. With rights come responsibilities. Letting something loose and then claiming it's not your fault is just the sort of thing that prompts those "Something must be done about this!!" regulations, enshrining half-baked ideas (that rarely solve the problem anyway) in stone.

> but if you aren't closely supervising your AI right now, maybe you ought to be held responsible for what it does after you set it loose.

You ought to be held responsible for what it does whether you are closely supervising it or not.

I don’t think there is a snowball’s chance in hell that either of these two scenarios will happen:

1. Human principals pay for autonomous AI agents to represent them, but the humans accept blame and lawsuits.

2. Companies selling AI products and services accept blame and lawsuits for actions agents perform on behalf of humans.

Likely realities:

1. Any victim will have to deal with the problems.

2. Human principals accept responsibility, and stop paying for the AI service after enough of them are burned by some "rogue" agent.