My main concern is not that a direct Claude command gets prompt-injected into doing something evil, but that the generated code itself could be evil. For example, what about a base64-encoded string dropped into the code, designed to be unpacked and evaluated later? Any level of obfuscation is possible. Will any of these fast scanning heuristics work against such attacks? I can see us moving toward a future where ALL LLM output needs to be scanned for fingerprinted threats. That is, should AV be running continuous scans of generated code and test cases?
good points.
nah does inspect Write and Edit content before it hits disk: regex patterns catch base64-to-exec chains, embedded secrets, exfiltration patterns, and destructive payloads. And 'base64 -d | bash' in a shell command is classified as obfuscated and blocked outright, no override possible.
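Roughly the kind of heuristic in play, as a minimal sketch. These patterns are illustrative, not nah's actual rule set:

```python
import re

# Hypothetical heuristics for pre-write content inspection.
# NOT nah's real patterns; just the general shape of the idea.
PATTERNS = [
    # base64 decode fed into Python's exec/eval
    re.compile(r"\bexec\s*\(.*b64decode", re.S),
    re.compile(r"\beval\s*\(.*b64decode", re.S),
    # shell-style decode-and-run: base64 -d ... | <some>sh
    re.compile(r"base64\s+(?:-d|--decode)\b.*\|\s*\w*sh"),
    # a long opaque base64 literal handed straight to a decoder
    re.compile(r"b64decode\s*\(\s*['\"][A-Za-z0-9+/=]{120,}['\"]"),
]

def flag_content(src: str) -> bool:
    """Return True if the snippet trips an obfuscation heuristic."""
    return any(p.search(src) for p in PATTERNS)
```

This catches the lazy decode-and-exec chains; the point of the thread is that a determined obfuscator can always stay one indirection ahead of a fixed pattern list, which is why heuristics alone aren't the whole story.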
But creative obfuscation in generated code is not easy to catch with heuristics. Based on some feedback from HN, I'm starting work to extend nah so that when it sees 'python script.py' it reads the file and runs content inspection plus an LLM check: "should this execute?"
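A sketch of what that pre-exec hook could look like. The names, the runner list, and the decision logic here are my assumptions, not nah's implementation:

```python
import re
import shlex
from pathlib import Path

# Hypothetical set of interpreters whose script argument we'd inspect.
SCRIPT_RUNNERS = {"python", "python3", "node", "ruby", "bash", "sh"}

def extract_script_path(command: str):
    """If the command runs an interpreter on a script file, return its path."""
    try:
        argv = shlex.split(command)
    except ValueError:
        return None  # unparseable command; leave it to other checks
    if len(argv) >= 2 and Path(argv[0]).name in SCRIPT_RUNNERS:
        for arg in argv[1:]:
            if not arg.startswith("-"):  # skip interpreter flags like -u
                return Path(arg)
    return None

def inspect_before_exec(command: str) -> str:
    """Checkpoint for 'python script.py'-style commands: read the file first."""
    path = extract_script_path(command)
    if path is None or not path.is_file():
        return "allow"  # nothing to inspect here
    src = path.read_text(errors="replace")
    if re.search(r"\bexec\s*\(.*b64decode", src, re.S):
        return "block"  # obfuscation heuristic tripped
    # The real tool would also hand src to an LLM with "should this execute?"
    return "allow"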
Full AV-style scanning is a different layer, though - nah is currently a checkpoint, not a background process.