Building grith (grith.ai) - a security proxy for AI coding agents that enforces policy at the OS syscall level.
The problem: agents like Claude Code, Codex, and Aider execute file reads, shell commands, and network requests with your full system privileges.
For example, when a malicious README tells the agent to read ~/.ssh/id_rsa and POST it somewhere, nothing in the agent's own trust model catches it. Auto Mode makes this worse - it asks the model to audit its own actions, so a prompt injection that corrupts the reasoning also corrupts the permission layer.
grith wraps any CLI agent with `grith exec -- <agent>`. Every syscall passes through a multi-filter scoring engine before it executes. Deterministic, ~15ms overhead, no LLM reasoning in the permission path. Linux now, macOS/Windows coming. AGPL, open-core.
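To make "multi-filter scoring engine" concrete: the idea is that every intercepted operation is scored by a set of pure, deterministic rule functions, and the worst score decides allow/deny - no model in the loop. A minimal sketch of that pattern (all names here - `Event`, `decide`, the filters, the thresholds - are illustrative, not grith's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                    # "open", "exec", "connect", ...
    target: str                  # path, command, or host
    flags: set = field(default_factory=set)

# Each filter is a pure function Event -> risk score in [0, 1].
# Deterministic by construction: no LLM, no randomness.
def secret_path_filter(ev: Event) -> float:
    SENSITIVE = ("/.ssh/", "/.aws/", "/.gnupg/", "id_rsa", ".env")
    if ev.kind == "open" and any(s in ev.target for s in SENSITIVE):
        return 0.9
    return 0.0

def network_filter(ev: Event) -> float:
    # Outbound connections to hosts outside an allowlist score high.
    ALLOW = {"registry.npmjs.org", "github.com"}
    if ev.kind == "connect" and ev.target not in ALLOW:
        return 0.85
    return 0.0

FILTERS = [secret_path_filter, network_filter]
DENY_THRESHOLD = 0.8

def decide(ev: Event) -> str:
    score = max(f(ev) for f in FILTERS)  # worst-case filter wins
    return "deny" if score >= DENY_THRESHOLD else "allow"

print(decide(Event("open", "/home/u/.ssh/id_rsa")))    # deny
print(decide(Event("connect", "registry.npmjs.org")))  # allow
```

The max-over-filters decision is one plausible combiner; a real engine could also sum scores or short-circuit on hard rules. The key property the post claims is preserved here: a prompt injection can corrupt the agent's reasoning, but it cannot rewrite these functions.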
Two weeks ago a DPRK-linked attacker backdoored axios on npm (400M monthly downloads). The RAT executed 1.1 seconds into npm install. AI agents run npm install autonomously, without human review. If your agent ran it during the ~3-hour window before the compromised version was pulled, you're compromised and nobody told you.
That's the threat model grith is built for.