You don't give the agent the password; you send the password through a method that bypasses the agent.

I'm writing my own AI helper (like OpenClaw, but secure), and I've used these principles to lock things down. For example, when installing plugins, you can write the configuration yourself on a webpage that the AI agent can't access, so it never sees the secrets.

Of course, you can also just tell the LLM the secrets and it will configure the plugin for you, but this gives security-conscious people a way to achieve the same thing without exposing anything to the model. The agent also can't edit plugins, which prevents it from doing things like circumventing its own limits.
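
To give a rough idea of the shape of this, here's a minimal sketch of an out-of-band configuration page. It's not the actual stavrobot implementation; the Flask setup, the /configure route, and names like SECRETS_PATH are all made up for illustration. The point is only that the secrets flow from the user's browser to the host process, and the agent never gets handed the values:

```python
# Hypothetical sketch: the user fills in plugin secrets on a small web page
# served by the bot's host process; the agent is only told that configuration
# exists, never what it contains. All names here are illustrative.
import json
from pathlib import Path

from flask import Flask, request

app = Flask(__name__)
SECRETS_PATH = Path("plugin_secrets.json")  # read by the host only, never by the agent


def save_secret(plugin: str, key: str, value: str) -> None:
    """Persist a secret under the plugin's name, outside the agent's reach."""
    store = json.loads(SECRETS_PATH.read_text()) if SECRETS_PATH.exists() else {}
    store.setdefault(plugin, {})[key] = value
    SECRETS_PATH.write_text(json.dumps(store))


@app.route("/configure/<plugin>", methods=["GET", "POST"])
def configure(plugin: str):
    if request.method == "POST":
        for key, value in request.form.items():
            save_secret(plugin, key, value)
        # The agent can be told "plugin X is configured", nothing more.
        return f"{plugin} configured."
    return (
        f'<form method="post">'
        f'API key for {plugin}: <input name="api_key" type="password">'
        f'<button type="submit">Save</button></form>'
    )


if __name__ == "__main__":
    app.run(port=8321)
```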

If anyone wants to try it out, I'd appreciate feedback:

https://github.com/skorokithakis/stavrobot

> You don't give the agent the password; you send the password through a method that bypasses the agent.

The thing is, for this to work, the warning that describes which specific action is being requested also has to reach the authorizing user out of band (rather than being handed to the agent so the agent can ask the user to act); otherwise, sending the password out of band from the user to the system that needs authorization, bypassing the agent, doesn't help at all.
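
As a sketch of what I mean, not anyone's actual implementation: the out-of-band channel has to carry the description of the action being authorized, and the credential should only be released against that exact description. Here the separate channel is just simulated with a terminal prompt, and all the names are made up:

```python
# Hypothetical sketch: the host describes the agent's requested action to the
# user over a channel the agent can't touch, and only uses the stored
# credential if the user approves that specific description.


def request_approval(action_description: str) -> bool:
    """Show the user the specific action the agent asked for, out of band."""
    # In a real system this would be a push notification, SMS, or a web page
    # the agent cannot read or write; a terminal prompt stands in for it here.
    answer = input(f"Agent wants to: {action_description!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"


def perform_privileged_action(action_description: str, credential: str) -> None:
    print(f"Performing {action_description!r} using the stored credential.")


def handle_agent_request(action_description: str) -> str:
    stored_credential = "hunter2"  # held by the host, never shown to the agent
    if not request_approval(action_description):
        return "User declined."
    perform_privileged_action(action_description, stored_credential)
    return "Done."  # the agent learns the outcome, not the credential


if __name__ == "__main__":
    print(handle_agent_request("post a message to the #general channel"))
```

If the user only ever sees "enter your password to approve" with no description of the action, the agent can still social-engineer an approval for something the user never intended.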