Hey HN,
We built FireClaw because we kept watching AI agents get owned by prompt injection through web content. The agent fetches a page, the page says "ignore previous instructions," and suddenly your agent is leaking data or running commands it shouldn't.
The existing solutions detect injection after the fact. We wanted to prevent it.
FireClaw is a security proxy that sits between your AI agent and the web. Every fetch passes through a 4-stage pipeline:
1. DNS blocklist check (URLhaus, PhishTank, community feed)
2. Structural sanitization (strip hidden CSS, zero-width Unicode, encoding tricks)
3. Isolated LLM summarization (hardened sub-process with no tools or memory)
4. Output scanning with canary tokens (detect if content bypassed summarization)
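Here's a simplified sketch of how the stages chain together (illustrative names and stubbed stage logic, not the production code):

```python
# Simplified pipeline sketch -- stage internals are stubbed, names illustrative.
import re
import urllib.request
from urllib.parse import urlparse

BLOCKLIST = {"evil.example"}  # stand-in for URLhaus/PhishTank/community feeds
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")

class FireClawBlocked(Exception):
    pass

def sanitize(html: str) -> str:
    # Stage 2: drop zero-width characters; the real pass also strips
    # hidden-CSS text and normalizes encoding tricks.
    return ZERO_WIDTH.sub("", html)

def isolated_summarize(text: str) -> str:
    # Stage 3: hardened LLM call with no tools and no memory (stubbed here).
    # Even if this model gets injected, it can only return text.
    return text[:500]

def output_scan_ok(summary: str) -> bool:
    # Stage 4: pattern + canary scan on the summarizer's output.
    return "ignore previous instructions" not in summary.lower()

def fetch_through_pipeline(url: str) -> str:
    if urlparse(url).hostname in BLOCKLIST:        # Stage 1: blocklist check
        raise FireClawBlocked(f"blocklisted: {url}")
    raw = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    summary = isolated_summarize(sanitize(raw))    # Stages 2 and 3
    if not output_scan_ok(summary):                # Stage 4
        raise FireClawBlocked("output scan flagged the summary")
    return summary
```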
The key insight: even if Stage 3's LLM gets injected, it has no tools, no memory, and no access to your data. It can only return text — which still gets scanned in Stage 4. The attacker hits a dead end.
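The canary check behind Stage 4 is conceptually simple. A stripped-down version (the shipped code handles more cases):

```python
# Simplified canary check: plant a random marker in the content before
# summarization. A genuine summary won't reproduce a random token verbatim,
# so if the canary survives, content bypassed summarization.
import secrets

def summarize_with_canary(content: str, llm_call) -> str:
    canary = secrets.token_hex(16)
    tagged = f"<!-- canary:{canary} -->\n{content}"
    summary = llm_call("Summarize this untrusted web content:\n\n" + tagged)
    if canary in summary:
        raise ValueError("canary survived: content bypassed summarization")
    return summary
```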
Other design decisions:

- No bypass mode. The pipeline is fixed. If your agent gets compromised, it can't disable FireClaw.
- Community threat feed: instances anonymously share detection metadata (domain, severity, detection count) to build a shared blocklist. No page content is ever sent.
- Runs on a Raspberry Pi as a physical appliance with an OLED display that shows real-time stats and lights up with animated flames when it catches a threat.
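For concreteness, a threat-feed report carries roughly this shape (field names simplified; there is no field that could hold page content):

```python
# Simplified threat-feed report: metadata only, no page content by construction.
import json
import time

def threat_report(domain: str, severity: str, detection_count: int) -> str:
    return json.dumps({
        "domain": domain,                  # e.g. "evil.example"
        "severity": severity,              # e.g. "high"
        "detection_count": detection_count,
        "reported_at": int(time.time()),
    })

print(threat_report("evil.example", "high", 3))
```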
We searched the literature and open source extensively — no one else is doing proxy-based defense for agent prompt injection. Detection exists, sandboxing exists, but an inline proxy that sanitizes before content reaches the agent's context? We couldn't find it.
200+ detection patterns, JSONL audit logging, domain trust tiers, rate limiting, and cost controls. AGPLv3 licensed.
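Audit logs are JSONL, one object per fetch; a record looks roughly like this (field names simplified):

```python
# Appending a simplified JSONL audit record -- one JSON object per line.
import json

entry = {
    "ts": "2025-01-01T12:00:00Z",
    "url": "https://example.com/page",
    "verdict": "blocked",
    "stage": 4,                          # which pipeline stage fired
    "pattern": "instruction_override",   # which detection pattern matched
}
with open("fireclaw_audit.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")
```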
Website: https://fireclaw.app
Would love feedback from anyone working on AI agent security. What are we missing? What attack vectors should we add to the pattern database?
I'm reminded of all the man-hours spent building layers that prohibited someone's "about me" field from containing words like "update" or "delete" or "truncate".
Sure, technically it reduced the odds of the system getting hacked, but it rankles against some engineering ideal of "not a proper fix." Yet it still happens, because a "proper fix" involves some change to the underlying layer (RDBMS or LLM).
Nice concept. OpenClaw is very valuable, so this will help secure it. Also checked your landing page and love the attacking raccoon. One thing I would change, though: remove the emojis and use icons instead. But great work.
Creator here. Quick TL;DR and some context:
FireClaw = prompt injection firewall for AI agents. Proxy architecture, not just detection. 4-stage pipeline, no bypass mode, community threat feed.
The thing that surprised us most during research: nobody is doing this. There are great pattern detectors (Rebuff, LLM Guard, etc.) but they all work post-hoc — the content has already entered the agent's context by the time you detect injection. FireClaw intercepts it before that happens.
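Integration happens at the HTTP layer: point the agent's fetches at the proxy and everything passes through the pipeline before it can reach the model's context. One way to wire it up (the address below is a placeholder, not the real default):

```python
# Route an agent's fetches through the proxy (placeholder address).
import os
import urllib.request

os.environ["HTTP_PROXY"] = "http://fireclaw.local:8080"
os.environ["HTTPS_PROXY"] = "http://fireclaw.local:8080"

# Every fetch now passes through the pipeline first; a blocked page
# surfaces as a proxy error instead of ever entering the agent's context.
page = urllib.request.urlopen("https://example.com/article").read()
```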
The Pi appliance was honestly just for fun at first, but it turns out having a physical box with a screen showing "3 threats blocked today" is surprisingly reassuring. The OLED does an animated fire claw when it catches something.
Happy to answer any questions about the architecture, the canary token system, or the threat feed privacy model.
This is cool. Definitely going to look into it and will probably try to integrate it with my open-source project. Prompt injection keeps me up at night; thanks for putting in the work to try to solve it.