It's your computer and you can do whatever yolo nonsense you want, my dude, but put those goalposts back where they were.

"Don't run that shit on a credentialed box with data you care about" is addressing real threats, not some goofy nation state thing or abstract security research.

If you let the footgun machine constantly generate new code and run it on your computer, you're just asking for data loss and bad shit to happen.

Docker isn't a great solution, but at least it doesn't let yolo code delete your files, read your env vars, or go through the contents of ~/.ssh/
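For the sake of concreteness, a minimal sketch of what I mean (standard docker CLI flags; `sandbox/` and `script.py` are placeholder names, not anything specific):

```shell
# --network none : no network, so nothing gets exfiltrated
# --read-only    : container root filesystem is immutable
# --tmpfs /tmp   : scratch space that dies with the container
# -v ...:ro      : exactly one host directory is visible, read-only
# Host env vars aren't passed in unless you explicitly -e them,
# and nothing from $HOME or ~/.ssh is mounted.
docker run --rm \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/sandbox":/work:ro \
  -w /work \
  python:3.12-slim python script.py
```

Not bulletproof (containers share the kernel), but it takes "delete my files / steal my keys" off the table for casual yolo runs.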

> my browser extensions, my npm install, my VSCode plugins, my curl | bash

Yeah, and you shouldn't yolo those, either lol. If they didn't come from a trusted source, you need to read through them. If you don't want to, don't use them. That's not paranoia, that's, like, normal.

> If you let the footgun machine constantly generate new code

Are you talking about autonomous LLM projects that automatically write code? Yeah, no shit, I wouldn't run anything like that directly on any machine without sandboxing. My typical LLM use inside my editor is never in self-driving mode - there's not even cruise control. I tell it exactly when to write, where to write, and how to do it. Automated scripts never get run by the LLM, and they don't get to run at all without prior, meticulous inspection. I'm not moving goalposts - at worst we disagree on where pragmatism ends and paranoia begins, that's all.

I don't even get why people are so worked up about LLMs generating code - on both sides. For me personally, LLMs are a great tool for investigating things, finding things, bridging gaps - the stuff that happens 10K feet above code writing. By the time I'm done gathering the details, generating the code is an almost insignificant part of the whole endeavor.