> Everything to do with LLM prompts reminds me of people doing regexes to try and sanitise input against SQL injections a few decades ago, just papering over the flaw but without any guarantees.
With the key difference being that SQL injection can actually be fixed (e.g., switch to prepared statements, or in the days before those existed, proper escaping), because the database protocol lets you send the query and the data separately. LLM prompts have no equivalent: instructions and untrusted input share the same token stream, so it's impossible to fix this vulnerability, only to paper over it.
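For contrast, here's a minimal sketch of the prepared-statement fix using Python's built-in `sqlite3` (the table and the injection string are illustrative): the parameter is passed out of band, so the driver matches it as a literal value rather than executing it as SQL.

```python
import sqlite3

# Tiny in-memory database with one user (illustrative setup).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# A classic injection attempt.
malicious = "alice' OR '1'='1"

# Parameterized query: `?` is a bound parameter, never concatenated
# into the SQL text, so the payload is treated as a plain string.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] — no row is literally named "alice' OR '1'='1"
```

There is no analogous mechanism for an LLM: every "parameter" you pass still arrives as tokens in the same channel as the instructions.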