Interesting direction.

One thing we've been seeing with production AI agents is that the real risk isn't just filesystem access, but the chain of actions agents can take once they have tool access.

Even a simple log-reading capability can escalate if the agent starts triggering automated workflows or calling internal APIs.

We've been experimenting with incident-aware agents that detect abnormal behavior and automatically generate incident reports with suggested fixes.
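To make that concrete, here's a rough sketch of the shape of thing I mean — all names are hypothetical, not from any real framework: a monitor wraps the agent's tool dispatch, compares each call against a baseline of tools seen during normal operation, and drafts an incident record with a suggested fix when something falls outside it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: flag tool calls outside a per-agent baseline
# and draft an incident report for each anomaly.
@dataclass
class ToolCallMonitor:
    baseline: set                                  # tool names seen during normal operation
    incidents: list = field(default_factory=list)

    def check(self, tool_name: str, args: dict) -> bool:
        """Return True if the call looks normal; record an incident otherwise."""
        if tool_name in self.baseline:
            return True
        self.incidents.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool_name,
            "args": args,
            "suggested_fix": f"Add '{tool_name}' to the sandbox allowlist "
                             "or revoke the capability that exposed it.",
        })
        return False

monitor = ToolCallMonitor(baseline={"read_logs"})
monitor.check("read_logs", {"path": "/var/log/app.log"})  # within baseline
monitor.check("trigger_workflow", {"id": "deploy"})       # anomalous, recorded
```

In practice you'd want the baseline learned over a window rather than hardcoded, but even this crude version catches the log-reader-suddenly-calling-internal-APIs case above.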

Curious if you're thinking about integrating behavioral monitoring or anomaly detection on top of the sandbox layer.