How do you prevent these models from reading secrets in your repos locally?
It's one thing for env vars to be user-pasted, but typically you're also giving the bot access to your file system to interrogate and understand your projects, right? Does this also block access to env files by detecting them and applying granular permissions?
I configure permission settings within projects.
https://code.claude.com/docs/en/settings#permission-settings
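For example, a minimal project-level deny list might look like this, a sketch assuming the `Read(...)`/`Bash(...)` rule syntax described in the settings docs linked above (check the docs for exact glob-matching behavior):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)"
    ]
  }
}
```

This goes in `.claude/settings.json` at the project root, so it can be committed and shared with the team.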
Ah yes - this is the way. Thanks.
This prevents Claude from directly reading certain files, but it doesn't prevent Claude from running a command that dumps the file to stdout and then reading the output... Claude will just `cat` the file if it decides it wants to see it.
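You can extend the deny list to shell commands too, though it's a coarse defense since the pattern only matches the literal command shape (illustrative patterns below, assuming the same `Bash(...)` rule syntax from the docs):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Bash(cat .env*)",
      "Bash(cat ./.env*)"
    ]
  }
}
```

A rule like this won't catch every variant (`grep . .env`, `cat < .env`, a Python one-liner, etc.), which is why defense in depth matters here.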
Yeah - that’s kinda what I was thinking. Unless you’re doing quite granular approvals it gets tricky.
By putting secrets in your environment instead of in your files, and running AI tools in a dedicated environment that has its own set of limited, revocable secrets.
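One way to sketch that: launch the agent through a wrapper that strips the environment and passes through only an allowlist plus a scoped, revocable token (the variable names and token path here are illustrative, not anything the tooling requires):

```shell
#!/bin/sh
# Run the agent in a stripped environment: `env -i` clears everything,
# then we pass through only what the tool needs plus one scoped token.
# AGENT_API_TOKEN and ~/.secrets/agent-token are hypothetical names.
env -i \
  HOME="$HOME" PATH="$PATH" TERM="$TERM" \
  AGENT_API_TOKEN="$(cat ~/.secrets/agent-token)" \
  claude
```

Anything not explicitly passed through (cloud credentials, production DB URLs, etc.) simply isn't visible to the child process, so there's nothing to dump.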
Yes - separate secrets always - but you've still got local or dev secrets. The permission settings above seem like the right way to go in the end. Thanks.