This just seems like the logical consequence of the chosen system to be honest. "Skills" as a concept are much too broad and much too free-form to have any chance of being secure. Security has also been obviously secondary in the OpenClaw saga so far, with users just giving it full permissions to their entire machine and hoping for the best. Hopefully some of this will rekindle ideas that are decades old at this point (you know, considering security and having permission levels and so forth), but I honestly have my doubts.

I think the truth is we don’t know what to do here. The whole point of an ideal AI agent is to do anything you tell it to - permissions and sandboxing would negate that. I think the uncomfortable truth is as an industry we don’t actually know what to do other than say “don’t use AI” or “well it’s your fault for giving it too many permissions”. My hunch is that it’ll become an arms race with AI trying to find malware developed by humans/AI and humans/AI trying to develop malware that’s not detectable.

Sandboxing and permissions may help some, but when you have self modifying code that the user is trying to get to impersonate them, it’s a new challenge existing mechanisms have not seen before. Additionally, users don’t even know the consequences of an action. Hell, even curated and non curated app stores have security and malware difficulties. Pretending it’s a solved problem with existing solutions doesn’t help us move forward.

Skills are just more input to a language model, right?

That seems bad, but if you're also having your bot read unsanitized stuff like emails or websites I think there's a much larger problem with the security model

No, skills are telling the model how to run a script to do something interesting. If you look at the skillshub the skills you download can include python scripts, bash scripts... i didn't look too much further after downloading a skill to get the gist of what they had done to wire everything up, but this is definitely not taking security into consideration

You are confused because the security flaws are so obvious it seems crazy that people would do this. It seems that many of us are experiencing the same perplexity when reading news about this.

"there are security flaws in the 'tell an llm with god perms to do arbitrary things hub'"

Is such an obvious statement it loses all relevant meaning to a conversation. It's a core axiom that no one needs stated.

> Security has also been obviously secondary in the OpenClaw saga so far

s/OpenClaw/LLM/g

[dead]