But... why not just run macOS in a VM?
Since AI agents work continuously, running macOS in a VM (via Apple's Virtualization framework directly) seems like the most secure option, and it requires a lot less auditing than any sandboxing script. (The critical feature: no access to my keychain.)
AI agents are nothing like container deploys, which come and go in sub-second time and need to be small enough that you can run many at once. (If you're running local inference, that's the primary resource hog anyway.)
I'm not too worried about multiple agents in the same VM stepping on each other. I give them different worktrees or directory trees; even if they collide 1% of the time, it's not a risk to the bare-metal system.
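The per-agent isolation described above can be sketched with `git worktree`, which gives each agent its own checkout of the same repo. This is a minimal illustration, assuming `git` is on PATH; the naming scheme (`wt-<agent>`) is made up for the example.

```python
# Sketch: one detached git worktree per agent, so concurrent edits land
# in separate directories instead of the same checkout.
import pathlib
import subprocess

def make_worktree(repo: str, agent: str) -> pathlib.Path:
    """Create a detached worktree for one agent, branched off HEAD.

    The worktree is placed next to the repo as wt-<agent> (an arbitrary
    convention for this sketch).
    """
    path = pathlib.Path(repo).parent / f"wt-{agent}"
    subprocess.run(
        ["git", "-C", repo, "worktree", "add", "--detach", str(path)],
        check=True,
    )
    return path
```

Each worktree shares the object store with the main repo, so this is cheap even with many agents, and a misbehaving agent only dirties its own directory.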
Not sure if I'm missing something...
One limitation: Apple's Virtualization framework doesn't offer USB passthrough, so you can't connect an iPhone to the guest for iOS development.
For me, it's filesystem latency on macOS when virtualizing that kills it. Cargo, npm, pip, etc. create many small files, and there's high per-file latency in the shared-filesystem layer.
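The complaint above is about per-file overhead rather than throughput, and it's easy to measure: time how long it takes to create many tiny files, the pattern cargo/npm/pip produce. A rough stdlib-only microbenchmark (run it natively and inside the VM's shared directory to compare; the file count and size are arbitrary):

```python
# Rough microbenchmark of per-file filesystem overhead: total time to
# create n small files scales with per-file latency, not bandwidth.
import pathlib
import tempfile
import time

def time_small_files(n: int = 1000, size: int = 64) -> float:
    """Return seconds taken to create n files of `size` bytes each."""
    with tempfile.TemporaryDirectory() as d:
        payload = b"x" * size
        start = time.perf_counter()
        for i in range(n):
            (pathlib.Path(d) / f"f{i}").write_bytes(payload)
        return time.perf_counter() - start

elapsed = time_small_files()
per_file_us = elapsed / 1000 * 1e6  # average microseconds per file
```

If the per-file number is a few hundred microseconds higher through the VM's shared-FS layer than on the host, a package install touching tens of thousands of files turns that into seconds or minutes of extra wall-clock time.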