Just treat the LLM as an NLP interface for data input. Still run the inputs against a deterministic heuristic for whether the action is permitted (or depending on the context, even for determining what action is appropriate).
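A minimal sketch of that pattern, assuming a hypothetical `parse`/`Request` shape (the names and rule values here are illustrative, not from any real system): the LLM only produces a structured request, and a plain deterministic function decides whether it is permitted.

```python
# The LLM is an NLP front-end that turns free text into a structured
# request; this deterministic layer decides whether the action is allowed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    action: str
    amount: float

# Hard rules: plain code, no model involved. Values are illustrative.
ALLOWED_ACTIONS = {"refund", "cancel"}
MAX_REFUND = 100.0

def is_permitted(req: Request) -> bool:
    if req.action not in ALLOWED_ACTIONS:
        return False
    if req.action == "refund" and req.amount > MAX_REFUND:
        return False
    return True

def handle(req: Request) -> str:
    # The LLM's output is treated as untrusted input, never as a decision.
    if not is_permitted(req):
        return "denied"
    return f"executing {req.action}"

print(handle(Request("refund", 50.0)))    # executing refund
print(handle(Request("refund", 500.0)))   # denied
print(handle(Request("delete_db", 0.0)))  # denied
```

The key property is that nothing the model emits can widen the set of permitted actions; it can only fill in the fields that the rule layer then checks.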

LLMs ignore instructions. They do not have judgement, just the ability to predict the most likely next token (with some chance of selecting one other than the absolutely most likely). There’s no way around that. If you need actual judgement calls, you need actual humans.

Exactly right - the deterministic layer is the only thing you can actually trust.

We landed on the same pattern: the LLM handles the understanding, hard rules handle the permission. The tricky part is maintaining those rules as the agent evolves. How are you managing rule updates? Code changes every time, or something more dynamic?