No mention of Pegasus or other software of that sort. Can the latest iOS still be infected?
There is no point creating such a document if the elephant in the room is not addressed.
Apple's head of SEAR (Security Engineering & Architecture) just gave the keynote at HEXACON, a conference attended by the companies that make spyware like Pegasus, such as NSO Group.
That doesn't seem like avoiding the elephant in the room to me. It seems very much like acknowledging the issue and addressing it head-on.
https://www.youtube.com/watch?v=Du8BbJg2Pj4
Pegasus isn't magic. It exploits security vulnerabilities just like everything else. Mitigating and fixing those vulnerabilities is a major part of this document.
Why? The obvious conclusion is that Apple is doing everything in its power to make the answer “no.”
You might as well enumerate all the viruses ever made on Windows, point to them, and then ask why Microsoft isn’t proving they’ve shut them all down yet in their documents.
That analogy misses the asymmetry in claims and power.
Microsoft does not sell Windows as a sealed, uncompromisable appliance. It assumes a hostile environment, acknowledges malware exists, and provides users and third parties with inspection, detection, and remediation tools. Compromise is part of the model.
Apple’s model is the opposite. iOS is explicitly marketed as secure because it forbids inspection, sideloading, and user control. The promise is not “we reduce risk”; it’s “this class of risk is structurally eliminated.” That makes omissions meaningful.
So when a document titled Apple Platform Security avoids acknowledging Pegasus-class attacks at all, it isn’t comparable to Microsoft not listing every Windows virus. These are not hypothetical threats. They are documented, deployed, and explicitly designed to bypass the very mechanisms Apple presents as definitive.
If Apple believes this class of attack is no longer viable, that’s worth stating. If it remains viable, that also matters, because users have no independent way to assess compromise. A vague notification that Apple “suspects” something, with no tooling or verification path, is not equivalent to a transparent security model.
The issue is not that Apple failed to enumerate exploits. It’s that the platform’s credibility rests on an absolute security narrative while quietly excluding the one threat model that contradicts it. In other words, Apple's model is good old security through obscurity.
I am not sure if you missed my earlier comment, but it's directly applicable to this point you've repeatedly made:
>If Apple believes this class of attack is no longer viable, that’s worth stating.
To say it more directly this time: they do explicitly speak to this class of attack in the keynote that I linked you to in my previous comment. It's a very interesting talk and I encourage you to watch it:
https://www.youtube.com/watch?v=Du8BbJg2Pj4
In some random YouTube video consisting mostly of waffle and meaningless claims like "95% of issues are architecturally prevented by SPTM". That's quite a neat, round number. Come on, dude.
It’s not “a weakness.” It’s many weaknesses chained together to make an exploit. Apple patches these as they are found. NSO then tries to find new ones to make new exploits.
Apple lists the security fixes in every update they release, so if you want to know what they’ve fixed, just read those. Known weaknesses get fixed. Software like Pegasus operates either by using known vulnerabilities on unpatched OSes, or by using secret ones on up-to-date OSes. When those secret ones get discovered, they’re fixed.
don't worry, they set the allow_pegasus boolean to false
Apple did create a boolean for that. They call it lockdown mode.
> Lockdown Mode is an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature. When Lockdown Mode is enabled, your device won’t function like it typically does. To reduce the attack surface that potentially could be exploited by highly targeted mercenary spyware, certain apps, websites, and features are strictly limited for security and some experiences might not be available at all.
If Pegasus can break the iOS security model, there’s no reason to think it politely respects Lockdown Mode. It’s basically an admission the model failed, with features turned off so users feel like they’re doing something about it.
Lockdown Mode works by reducing the surface area of possible exploits. I don't think there's any failure here. Apple puts a lot of effort into resolving web-based exploits, but they can also prevent entire classes of exploits by simply blocking you from opening any URL in iMessage. It's safer, but most users wouldn't accept that trade-off.
Claiming reduced attack surface without showing which exploit classes are actually eliminated is faith, not security.
And Lockdown Mode is usually enabled _after_ a user suspects targeting.
If you did RTFA for this story, you’ll see on page 67 what I pasted, with a link to the support article describing to end users exactly what’s blocked. It does greatly reduce the attack surface.