nah addresses "should this action be allowed?" — deterministic classification of tool calls against policies. Smart design, and the no-dependency stdlib approach is the right call for security tooling.
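To be clear about the shape of that first question (this is *not* nah's actual API — just a toy, stdlib-only sketch of deterministic classification, with hypothetical tool names and glob-style rules):

```python
import fnmatch

# Hypothetical policy table: (verdict, tool pattern, argument pattern).
# First match wins, default-deny -- the same input always yields the
# same verdict, which is what makes it auditable.
POLICIES = [
    ("deny",  "shell.exec", "*rm -rf*"),
    ("allow", "shell.exec", "*"),
    ("deny",  "payments.*", "*"),
]

def classify(tool: str, arg: str) -> str:
    for verdict, tool_pat, arg_pat in POLICIES:
        # fnmatchcase keeps matching case-sensitive and platform-independent
        if fnmatch.fnmatchcase(tool, tool_pat) and fnmatch.fnmatchcase(arg, arg_pat):
            return verdict
    return "deny"  # anything unmatched is refused

print(classify("shell.exec", "ls -la"))         # allow
print(classify("shell.exec", "rm -rf /tmp/x"))  # deny
print(classify("payments.transfer", "{}"))      # deny
```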

The complementary question most agent safety tools ignore: what happens when things go wrong despite permissions?

I run 8 AI agents managing my company (marketing, accounting, legal, ops). We have a similar permission model — Marketing can't publish claims without Lawyer review, financial changes need CFO sign-off, hard boundaries on auth/compliance. But permissions alone didn't save us when two agents fired parallel writes to the same knowledge graph. Both writes were individually permitted. The second silently overwrote the first. No error, no policy violation — data just disappeared.
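The failure mode is plain last-write-wins. A minimal sketch (hypothetical store, not our actual memory server) of the race and the compare-and-swap check that would have surfaced it:

```python
import threading

class KnowledgeGraph:
    """Toy versioned store. Both writes below are 'permitted';
    only the version check turns silent loss into a visible conflict."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}
        self._version = 0

    def read(self):
        with self._lock:
            return dict(self._data), self._version

    def write(self, data):
        # Naive write: no version check, so concurrent writers
        # silently clobber each other.
        with self._lock:
            self._data = data
            self._version += 1

    def write_if_version(self, data, expected_version):
        # Optimistic concurrency: refuse if someone wrote since we read.
        with self._lock:
            if self._version != expected_version:
                return False  # conflict reported, not swallowed
            self._data = data
            self._version += 1
            return True

kg = KnowledgeGraph()
kg.write({"facts": ["a"]})

# Two agents read the same snapshot, then write "in parallel".
snap1, v1 = kg.read()
snap2, v2 = kg.read()

assert kg.write_if_version({**snap1, "agent1": True}, v1)      # first write lands
assert not kg.write_if_version({**snap2, "agent2": True}, v2)  # second is rejected
```

With the naive `write`, the second call would have succeeded and erased `agent1`'s update — no error, no policy violation, exactly the failure we hit.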

What saved us: Erlang-style supervision trees. The memory server detected corruption on load, crashed intentionally, the supervisor restarted it in microseconds, and auto-repair ran on init. No human at 3am.
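The crash-restart-repair loop fits in a few lines. A sketch of the pattern (hypothetical names, stand-in repair logic — the real server is more involved), showing "let it crash" plus repair-on-init under a restart-limited supervisor:

```python
events = []
store = {"facts": "corrupted-blob"}  # a bad write clobbered the list

class CorruptStateError(Exception):
    pass

class MemoryServer:
    def __init__(self, store):
        self.store = store
        if store.pop("needs_repair", False):
            store["facts"] = []        # auto-repair on init, after a crash
            events.append("repaired")

    def load(self):
        if not isinstance(self.store["facts"], list):
            # "Let it crash": flag the state and fail loudly rather than
            # serve corrupt data.
            self.store["needs_repair"] = True
            raise CorruptStateError("corrupt knowledge graph")
        events.append("serving")
        return self.store["facts"]

def supervise(start_child, max_restarts=3):
    # One-for-one supervisor: restart the child on crash, up to a limit,
    # then escalate instead of looping forever.
    for _ in range(max_restarts + 1):
        try:
            return start_child().load()
        except CorruptStateError:
            events.append("restarted")
    raise RuntimeError("restart limit hit; escalate to parent supervisor")

facts = supervise(lambda: MemoryServer(store))
```

The restart limit matters: a child that crashes on every init means the repair itself is broken, and that failure should propagate up the tree rather than spin.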

Permission guards prevent known-bad actions. Supervision makes unknown-bad outcomes survivable. Most agent safety work focuses exclusively on the first problem.

Wrote up the full race condition mechanics and supervision strategies: https://dev.to/setas/why-erlangs-supervision-trees-are-the-m...