The website copy is obviously generated, and has not been reviewed for correctness.
The website trumpets "25+ curated prompt injection patterns from leading security research". The README of the linked GitHub repo promises: "100+ curated injection patterns from JailbreakBench".
None of the research sources are actually linked for us to review.
The README lists "integrations" with various security-oriented entities, but no such integration is apparent in the code.
The project doesn't earn the credibility it claims for itself. Because the author trusts bad LLM output enough to publish it as their own work, we have to assume that they don't have the knowledge or experience to recognize it as bad output.
Sorry for the bluntness, but there are few classes of HN submission that rankle as much as these polished bits of fluff. My advice: do not use AI to publicly imply abilities or knowledge you don't have; it will never serve you well.
Yes, to be completely honest this is a vibe-coded project and I'm by no means a security expert. It was more of a fun side project/experiment based on a shower thought. I admit it's disingenuous to imply security knowledge I don't have, but for what it's worth, I just prompted Claude to research the latest papers on prompt injection and it made those claims on its own. That's not an excuse for failing to review the AI's output more carefully, so in the future I'll vet LLM output properly and present this kind of thing as a vibe-coded project up front. Apologies, I'm just a noob in prompt injection security who doesn't know what he's doing :(
There's absolutely no problem with not knowing what you're doing! Just, you know, own it.
Part of what I find exhausting about projects like this is that I can't see any evidence of the person who ostensibly created it. No human touch whatsoever; it's a real drag to read this stuff.
By all means, vibe code things, but put your personal stamp on it if you want people to take notice.
Yes, absolutely, updating the page now as we speak!