A supervisor layer of deterministic software that reviews and approves or declines every LLM-initiated action? Data loss prevention already exists to protect confidentiality. Credit card transactions can be subject to limits on amount per transaction, per day, and per month, with varying levels of approval.
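The credit-card analogy can be sketched as deterministic code with no LLM in the approval path. A minimal illustration, with all names and limits hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SpendSupervisor:
    """Deterministic supervisor: approves or declines each spend event
    against fixed per-transaction, per-day, and per-month limits.
    Illustrative only, not a real API."""
    per_txn_limit: float
    daily_limit: float
    monthly_limit: float
    spent_today: float = 0.0
    spent_this_month: float = 0.0

    def review(self, amount: float) -> bool:
        # Hard rules, evaluated the same way every time.
        if amount > self.per_txn_limit:
            return False
        if self.spent_today + amount > self.daily_limit:
            return False
        if self.spent_this_month + amount > self.monthly_limit:
            return False
        # Approved: record the spend so later events see the running totals.
        self.spent_today += amount
        self.spent_this_month += amount
        return True

sup = SpendSupervisor(per_txn_limit=50, daily_limit=200, monthly_limit=1000)
print(sup.review(30))   # approved
print(sup.review(60))   # declined: over per-transaction limit
```

The point is that the LLM can only request a spend; approval lives in ordinary, auditable software.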

LLMs obviously can be controlled to some degree - their developers do it somehow, or we'd see very different output.

Good idea! Unfortunately that requires classical-software levels of time and effort, so it's unlikely to be appealing to the AI hype crowd.

Such a supervisor layer for a system as broad and arbitrary as an internet-connected assistant (clawdbot/openclaw) is also not an easy thing to create. We're talking tons of events to classify, rapidly moving API targets for the services it integrates with, and the omnipresent risk that the LLMs emitting the events could be tricked into obfuscating or concealing what they're actually trying to do, just as a human attacker would.
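The obfuscation risk is concrete: a supervisor that classifies events by surface text is trivially evaded, and even parsing the event helps only a little. A hypothetical sketch (the blocklist and helper names are made up):

```python
import shlex

# Hypothetical blocklist of shell programs the supervisor should decline.
BLOCKED_COMMANDS = {"rm", "curl", "ssh"}

def naive_block(command: str) -> bool:
    # Substring matching: evaded by quoting, encoding, aliases, etc.
    return any(b in command for b in BLOCKED_COMMANDS)

def parse_and_block(command: str) -> bool:
    # Slightly better: parse into argv and check the program name.
    # Still bypassable via $PATH games, wrappers, `base64 | sh`, and so on.
    try:
        argv = shlex.split(command)
    except ValueError:
        return True  # unparseable input: decline by default
    return bool(argv) and argv[0] in BLOCKED_COMMANDS

# Shell quoting hides "rm" from the substring check ("r''m" still runs rm),
# but shlex joins the quoted fragments back into a single token.
print(naive_block("r''m -rf /"))      # slips past the substring check
print(parse_and_block("r''m -rf /"))  # caught after parsing
```

Scaling that from one command blocklist to every action an internet-connected assistant can take, against adversarial inputs, is where the real effort goes.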
