>In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it’s running on is impossible.
This is part of why I think we should reconsider the copyright situation with AI-generated output. If we treat the human who set the bot up as the author, then this would be no different from a human taking those same actions. I.e., if the bot makes up something damaging, then it's libel, no? And the human would clearly be responsible, since they're the "author".
But since we decided that the human who set the whole thing up is not the author, it's a bit more ambiguous whether that human is actually responsible. They might be able to claim it was accidental.
We can write new laws when new things happen; not everything has to circle back to copyright, a concept invented in the 1700s to protect printers' guilds.
Copyright is about granting exclusive rights. Maybe there's an argument to be had about granting a person rights over an AI tool's output when it's "used with supervision and intent", but I see very little sense in granting them exclusive rights over a potentially vast amount of AI-generated output that they had no hand whatsoever in producing.