1. AFAIK most AI coding agents don't do this
2. even if the AI agent itself is sandboxed, if it can make changes to code and you don't inspect all of its output, it can easily plant malicious code that gets executed the moment you try to run it. The only safe way of doing this is a dedicated AI development VM where you do all the prompting and testing, with very limited credentials present (in case it gets compromised), and where changes only leave the VM after a thorough inspection (e.g. a PR process) — rough sketch of that workflow below.
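To make the idea concrete, here's a minimal sketch of that "isolated dev environment + review before merge" flow, assuming the agent is some arbitrary CLI. Nothing here is a real agent API: `ai-dev-sandbox` is a hypothetical container image with the agent and toolchain baked in (and no credentials or SSH keys), and `ai-agent` is a hypothetical command; the point is just that the agent only touches a throwaway environment and a human reviews the full diff before anything runs or merges.

```python
#!/usr/bin/env python3
"""Sketch of the isolated AI dev environment described above.

Assumptions: `ai-dev-sandbox` is a hypothetical Docker image containing the
coding agent plus build tools but NO credentials, and `ai-agent` is a
hypothetical CLI entry point for that agent.
"""
import subprocess
import sys

REPO = sys.argv[1] if len(sys.argv) > 1 else "."

# 1. Run the agent in a throwaway container: host env and credentials are not
#    mounted, only the repo checkout is. (Network is fully disabled here for
#    illustration; a real agent would likely need egress to its model API.)
subprocess.run(
    [
        "docker", "run", "--rm",
        "--network", "none",           # nothing to exfiltrate to
        "-v", f"{REPO}:/workspace",    # the only thing the agent can touch
        "-w", "/workspace",
        "ai-dev-sandbox",              # hypothetical sandbox image
        "ai-agent", "--task", "implement feature X",  # hypothetical agent CLI
    ],
    check=True,
)

# 2. Nothing is executed or merged yet. A human reviews the full diff first,
#    exactly as you would for a PR from an untrusted contributor.
diff = subprocess.run(
    ["git", "-C", REPO, "diff"],
    capture_output=True, text=True, check=True,
).stdout
print(diff or "No changes produced.")
```

The key property is that running the agent and running its output are two separate, human-gated steps: the generated code can't execute anything on your real machine until someone has actually read the diff.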