Even if you know the source of the text before you feed it to the model you still need to solve the problem of how to send untrusted text from a user through a model without that untrusted text being able to trigger additional tool calls or actions.
The most credible pattern I've seen for that comes from the DeepMind CaMeL paper - I would love to see a browser agent that robustly implemented those ideas: https://simonwillison.net/2025/Apr/11/camel/
> Even if you know the source of the text before you feed it to the model you still need to solve the problem of how to send untrusted text from a user through a model without that untrusted text being able to trigger additional tool calls or actions.
We're exploring taking the action plan that a reasoning model (which sees both trusted and untrusted text) comes up with and passing it to a second model, which doesn't see the untrusted text and which then evaluates it.
> The most credible pattern I've seen for that comes from the DeepMind CaMeL paper
Yeah we're aware of the CaMeL paper and are looking into it, but it's definitely challenging from an implementation pov.
Also, I see that we said "The browser should clearly separate the user’s instructions from the website’s contents when sending them as context to the model" in the blog post. That should have been "backend", not "model". Agreed that once you feed both trusted and untrusted tokens into the LLM the output must be considered unsafe.
>We're exploring taking the action plan that a reasoning model (which sees both trusted and untrusted text) comes up with and passing it to a second model, which doesn't see the untrusted text and which then evaluates it.
How is this different from the Dual-LLM pattern that’s described in the link that was posted? It immediately describes how that setup is still susceptible to prompt injection.
>With the Dual LLM pattern the P-LLM delegates the task of finding Bob’s email address to the Q-LLM—but the Q-LLM is still exposed to potentially malicious instructions.