The vulnerability framing is like saying SQL injection was unfixable in 2005. Security and defense will always lag behind new technology shifts and platform shifts. Just like web security did not catch up until two decades later from the internet, the early days of the internet were rife with viruses. Do people still remember LimeWire? But we can all be aware of these risks and take necessary precautions. It's just like when you install antivirus with your computer or you have antivirus for your browser. You also need an antivirus for your AI agent.
In actuality "Antivirus" for AI agents looks something more like this:
1. Input scanning: ML classifiers detect injection patterns (not regex, actual embedding-based detection) 2. Output validation: catch when the model attempts unauthorized actions 3. Privilege separation: the LLM doesn't have direct access to sensitive resources
Is it perfect? No. Neither is SQL parameterization against all injection attacks. But good is better than nothing.
(Disclosure: I've built a prompt protection layer for OpenClaw that I've been using myself and sharing with friends - happy to discuss technical approaches if anyone's curious.)
> Is it perfect? No. Neither is SQL parameterization against all injection attacks. But good is better than nothing.
What injection attack gets through SQL parameterization?
If you must generate nonsense with an LLM, at least proofread it before posting.