Yes, to be completely honest, this is a vibe-coded project and I'm by no means a security expert. It was more of a fun side project/experiment based on a shower thought. I admit it was disingenuous to imply security knowledge, but for what it's worth, I just prompted Claude to research the latest papers on prompt injection and it made the claims on its own. That's no excuse for not reviewing the AI's output more carefully, so in the future I'll be more careful with LLM output and present this as a vibe-coded project up front. Apologies, I'm just a noob in prompt injection security who doesn't know what he's doing :(
There's absolutely no problem with not knowing what you're doing! Just, you know, own it.
Part of what I find exhausting about projects like this is that I can't see any evidence of the person who ostensibly created it. No human touch whatsoever - it's a real drag to read this stuff.
By all means, vibe code things, but put your personal stamp on it if you want people to take notice.
Yes, absolutely - updating the page now as we speak!