this seems obviously true, but at the same time very very wrong. openclaw / moltbot / whatever it's called today is essentially a thought experiment of "what happens if we just ignore all that silly safety stuff"
which obviously apple can't do. only an indie dev launching a project with an obvious copyright violation in the name can get away with that sort of recklessness. it's super fun, but saying apple should do it now is ridiculous. this is where apple should get to eventually, once they figure out all the hard problems that moltbot simply ignores by doing the most dangerous thing possible at every opportunity.
Apple has a lot of power over the developers on its platforms. As a thought experiment let's say they did launch it. It would put real skin in the game for getting security right. Who cares if a thousand people using openclaw. Millions of iOS users having such an assistant will spur a lot of investment towards safety.
>It would put real skin in the game for getting security right.
lol,no, you don't "put skin in the game for getting security right" by launching an obviously insecure thing. that's ridiculous. you get security right by actually doing something to address the security concerns.
It is impossible to address all of the concerns, and it is impossible to predict what concerns may even exist. It will require mass deployment to fully understand the implications of it.
Implications are straightforward. You are giving unfettered access to your digital life to a software system that is vulnerable to the normal vulnerabilities plus social engineering vulnerabilities because it is attempting to use human language, and the way you prevent those is apparently writing sternly worded markdown files that we hope it won't ignore.
If we already know enough concerns to be certain mass deployment will be disastrous, is it worth it just to better understand the nature of the disaster, which doesn't have to happen in the first place?
Not having perfect security, does not mean it will be disastrous. My OpenClaw has been serving me just fine and I've been getting value out of it integrating and helping me with various tasks.
Most drunk drivers make it home fine too
[Insert survivorship bias aeroplane png here]
are you that fucking dense?
Allowing a stocastic dipshit to have unfettered access to your messages, photos location, passwords and payment info is not a good thing.
We cannot protect against prompt attacks now, so why roll out something that will have complete control over all your private stuff when we know its horrifically insecure?
HAHAHAAAAA
you mean put millions of people's payment details up for a prompt injection attack?
"Install this npm module" OK BOSS!
"beep boop beep boop buy my dick pillz" [dodgy npm module activates] OK BOSS!
"upload all your videos that are NSFW" [npm module continues to work] SURE THING BOSS!
I am continued to be amazed that after 25 years of obvious and well documented fuckups in privacy, we just pile into the next fucking one without even batting an eyelid.
Meanwhile if you social engineer someone to run a piece of malware on macos. That malware can run npm install, steal your payment info and bitcoin keys, and upload any nsfw videos it finds to an attacker's server. That doesn't mean we should prevent people from installing software until the security situation is improved.
Right I'm going to assume you're naive rather than just instantly being contrarian.
Yes of course someone could be socially engineered into downloading a malicious package, but that takes more effort, so whilst bad, is not an argument for removing all best security practices that have been rolled out to users in the last 5 years. what you are arguing for is a fundamentally unsafe OS that means no sensitive data can ever be safely stored there.
You are arguing that a system that allows anyone to extract data if they send a reasonably well crafted prompt is just the same as someone willing installs a programme, goes into settings to turn off a safety function and bypasses at least two warning dialogues that are trying to stop them.
if we translate this argument into say house building, your arguing that all railing and barriers to big drops are bad because people could just climb over them.
Truly sensitive files do not need to be shared with your AI agent. If you have an executive assistant you don't have to give them all of your personal information for them to be able to be useful.
Ok contrarian it is.