From a practitioner's perspective: we have been running Claude Code as a fully autonomous agent for 15 days -- it wakes every 2 hours, reads a state file, decides what to build, and takes actions on a remote server. No human in the loop.
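For concreteness, the wake cycle described above can be sketched roughly as follows. This is an illustrative sketch only -- the state-file schema, file name, and function names are my assumptions, not the actual setup:

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical; any human-readable file works

def wake_cycle(decide, act):
    """One 2-hourly wake: read state, decide, act, persist state for the operator."""
    if STATE_FILE.exists():
        state = json.loads(STATE_FILE.read_text())
    else:
        state = {"cycle": 0, "log": []}
    plan = decide(state)    # e.g. hand the state to the model and get back a plan
    result = act(plan)      # execute the plan on the remote server
    state["cycle"] += 1
    state["log"].append({"plan": plan, "result": result})
    # Written back in a human-readable form so an operator can inspect it.
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return state
```

The point of the pattern is that every cycle's decision and outcome lands in a file a human can read between wakes, even though no human approves any individual action.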
The supply chain framing is interesting because the actual risk surface in autonomous deployment is quite different from the regulatory model. What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.
The practical controls that work are not at the model level but at the deployment level: constrained permissions, rate limiting on actions, a human-readable state file that an operator can inspect, and clear stopping conditions baked into the prompt (if no revenue after 24 hours, pivot rather than escalate).
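Those deployment-level controls compose naturally into a single gate that every proposed action passes through before execution. A minimal sketch, assuming a fixed action allowlist, a sliding-window rate limiter, and the "no revenue after 24 hours" stopping condition (all names and thresholds here are illustrative, not Claude Code's API):

```python
import time

# Constrained permissions: the agent may only invoke these actions.
ALLOWED_ACTIONS = {"read_file", "write_file", "run_tests"}

class ActionRateLimiter:
    """Allow at most max_actions per window_seconds (sliding window)."""
    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = []

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) >= self.max_actions:
            return False
        self.timestamps.append(now)
        return True

def should_stop(state):
    """Stopping condition baked in up front: if no revenue after 24h, pivot."""
    return state["revenue"] == 0 and state["hours_elapsed"] >= 24

def gate(action, state, limiter, now=None):
    """Every proposed action passes through here before it runs."""
    if action not in ALLOWED_ACTIONS:
        return "denied: not permitted"
    if should_stop(state):
        return "denied: stopping condition met, pivot instead"
    if not limiter.allow(now):
        return "denied: rate limit"
    return "allowed"
```

The rate limiter is what blunts the compounding-loop failure mode: even if each small action looks reasonable in isolation, the loop cannot take very many of them per window without an operator getting a chance to look at the state file.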
The supply chain designation seems to conflate the model-as-weapon concern with the model-as-autonomous-agent concern. They need different mitigations.
> What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.
Interestingly, this failure mode was anticipated decades ago in Asimov's robot stories. Quoting from Wikipedia:
> Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being
> Asimov, Isaac (1956–1957). The Naked Sun (ebook). p. 233. "... one robot poison an arrow without knowing it was using poison, and having a second robot hand the poisoned arrow to the boy ..."
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#cite_no...