I don’t believe this was ever confirmed by Apple, but there was widespread speculation at the time[1] that the delay was due to the very prompt injection attacks OpenClaw users are now discovering. It would be genuinely catastrophic to ship an insecure system with this kind of data access, even with an ‘unsafe mode’.

These kinds of risks can only be _consented to_ by technical people who correctly understand them, let alone borne by them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.

The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.

[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...

Exactly. Apple operates at a scale where it's very difficult to deploy this technology for its sexy applications. The tech is simply too broken and flawed at this point. (Whatever Apple does deploy, you can bet it will be heavily guardrailed.) With ~2.5 billion devices in active use, they can't take the Tesla approach of letting AI drive cars into fire trucks.

This is so obvious I'm kind of surprised the author used to be a software engineer at Google (based on his Linkedin).

OpenClaw is very much a greenfield idea and there's plenty of startups like Raycast working in this area.

Being good at leetcode grinding isn’t the same as being a good product person.

iOS 26 is proof that many product managers at Apple need to find another calling. The usability enshittification in that release is severe and embarrassing.

Or maybe, while being as good as they are at their jobs, they were forced to follow a broken vision with a non-negotiable release date.

And simply chose to keep their jobs.

Which also suggests that they need a new calling

shots fired!

Ouch. You could have taken a statistical approach "google is not known for high quality product development and likely therefore does not select candidates for qualities in product-development domain" - I'm talking too much to Gemini, aren't I?

I'm not that surprised because of how pervasive the 'move fast and break things' culture is in Silicon Valley, and what is essentially AI accelerationism. You see this reflected all over HN as well, e.g. when Cloudflare goes down and it's a good thing because it gives you a break from the screen. Who cares that it broke? That's just how it is.

This is just not how software engineering goes in many other places, particularly where the stakes are much higher and can be life altering, if not threatening.

[deleted]

It is obvious if viewed through an Apple lens. It wouldn't be so obvious if viewed through a Google lens. Google doesn't hesitate to throw whatever its got out there to see what sticks; quickly cancelling anything that doesn't work out, even if some users come to love the offering.

Regardless of how Apple will solve this, please just solve it. Siri is borderline useless these days.

> Will it rain today? Please unlock your iphone for that

> Any new messages from Chris? You will need to unlock your iphone for that

> Please play youtube music Playing youtube music... please open youtube music app to do that

All settings and permission granted. Utterly painful.

You'll need to unlock your iPhone first. Even though you're staring at the screen and just asked me to do something, and you saw the unlocked icon at the top of your screen before/while triggering me, please continue staring at this message for at least 5 seconds before I actually attempt FaceID to unlock your phone to do what you asked.

I think half your examples are made up, or not Apple's fault, but it sounds like what you really want is to disable your passcode.

I LOVE the "complaining about apple ux? no way, YOU'RE the problem / you're doing it wrong / you must not be a mac person".

Thanks for keeping this evergreen trope going strong!

well if you're making complaints that aren't true, or asking for functionality that exists already, your complaints don't seem very credible to me.

"Will it rain today? Sorry, I can't do that while you're driving."

Do you want people being able to command your phone without unblocking? Maybe what you want is to disable phone blocking all together

I want a voice control experience that is functional. I don't want every bad thing that could happen-- especially those that will only happen if I'm careless to begin with-- circumscribing an ever shrinking range, often justified by contrived examples and/or for things much more easily accomplished through other methods.

That would be very useful but is not a trivial problem.

Probably need VoiceID so only authorized people can talk to it.

Oh no, what if they put on Christmas music playlist in February? the horror!

There should exist something between "don't allow anything without unlocking phone first" and "leave the phone unlocked for anyone to access", like "allow certain voice commands to be available to anyone even with phone locked"

Playing music doesn’t require unlocking though, at least not from the Music app. If YouTube requires an unlock that’s actually a setting YouTube sets in their SiriKit configuration.

For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don’t document this anywhere that I can see.) The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.

Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.

Can it not recognize my voice? I had to record the pronunciation of 100 words when I setup my new iPhone - isn’t there a voice signature pattern that could be the key to unlock?

It certainly should have been a feature up until now. However, I think at this point anyone can clone your voice and bypass it.

But as a user I want to be able to give it permission to run selected commands even with the phone locked. Like I don't care if someone searches google for something or puts a song via spotify. If I don't hide notifications when locked, what does it matter that someone who has my phone reads them or listens to them?

Personal Voice learns to synthesize your voice, not to identify it.

Not really. Giving the weather forecast or playing music seems pretty low risk to me.

Siri doesnt make me unlock the phone to give a weather report.

Right, but you understand why allowing access to unauthenticated voice is bad for security right?

But you understand why if I don't care about that, I should be able to run it, right?

you can, you can turn locking off.

But the point is, you are a power user, who has some understanding of the risk. You know that if your phone is stolen and it has any cards stored on them, they can be easily transferred to another phone and drained. Because your bank will send a confirmation code, and its still authorized, you will be held liable for that fraud.

THe "man in the street" does not know that, and needs some level of decent safe defaults to avoid such fraud.

I understand why you'd want to do it.

Oddly enough I also understand Apple telling you, good luck, find someones platform that will allow that, that's not us.

re: youtube music, I just tried it on my phone and it worked fine... maaaybe b/c you're not a youtube premium subscriber and google wants to shove ads into your sweet sweet eyeballs?

The one that kindof caught me off guard was asking "hey siri, how long will it take me to get home?" => "You'll need to unlock your iPhone for that, but I don't recommend doing that while driving..." => if you left your phone unattended at a bar and someone could figure out your home address w/o unlock.

...I'm kindof with you, maybe similar to AirTags and "Trusted Locations" there could be a middle ground of "don't worry about exposing rough geolocation or summary PII". At home, in your car (connected to a known CarPlay), kindof an in-between "Geo-Unlock"?

I pay for YouTube Music and I see really inconsistent behavior when asking Siri to play music. My five-year-old kid is really into an AI slop song that claims to be from the KPop Daemon Hunters 2 soundtrack, called Bloodline (can we talk about how YT Music in full of trashy rip-off songs?). He's been asking to listen to it every day this week in the car and prior to this morning, saying "listen to kpop daemon hunters bloodline" would work fine, playing it via YT Music. This morning, I tried every iteration of that request I could think of and I was never able to get it to play. Sometimes I'd get the response that I had to open YT Music to continue, and other times it would say it was playing, but it would never actually queue it up. This is a pretty regular issue I see. I'm not sure if the problem is with Siri or YT Music.

Its hard to come up with useful AI apps that aren't massive security or privacy risks. This is pretty obvious. For an agent to be really useful it needs to have access to [important stuff] but giving an AI access to [important stuff] is very risky. So you can get some janky thing like OpenClaw thats thrown together by one guy and has no boundaries and everyone on HN thinks is great, but its going to be very difficult for a big firm to make a product like that for mass consumption without it risking a massive disaster. You can see that Apple and Microsoft and Salesforce and everyone are all wrestling with this. Current LLMs are too easily hoodwinked.

I think you're being very generous. There's almost 0 chance they had this actually working consistently enough for general use in 2024. Security is also a reason, but there's no security to worry about if it doesn't really work yet anyway

The more interesting question I have is if such Prompt Injection Attacks can ever be actualy avoided, with how GenAI works.

Removing the risk for most jobs should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user-acknowledge for potential problematic stuff, similar to how we have root-access-dialogues now. I mean, you don't really need access to all data, when you are just setting a clock, or playing music.

They could be if models were trained properly, with more carefully delineated prompts.

I'd be super interested in more information on this! Do you mean abandoning unsupervised learning completely?

Prompt Injection seems to me to be a fundamental problem in the sense that data and instructions are in the same stream and there's no clear/simple way to differentiate between the two at runtime.

Perhaps not, and it is indeed not unwise from Apple to stay away for a while given their ultra-focus on security.