Nothing yet. Probably a new kind of model needs to be trained that can find injected prompts, sort if like an immune system for LLMs. Then the sanitized data can be passed to the LLM after.

No real solution for it yet. I would be interested to try to train a model for this but no budget atm.