Security through obscurity is not a great idea, yet that is essentially Apple's current approach. For instance, if your iPhone is infected with malware, there is no anti-virus software that can find it, because Apple doesn't let software have the deep access needed for scanning.
It's not security by obscurity. It's security by minimizing the attack surface by being extremely picky about what you sign. When it is paramount that the code you sign is correct, you can't go signing a ton of different projects from people who may not even care about security as much as you do.
>For instance if your iPhone is infected with malware
Then restarting it will remove it. So far Apple has had a perfect record with this, unlike Android.
> Then restarting it will remove it. So far Apple has had a perfect record with this unlike Android.
Not things like Pegasus.
It does not minimise the attack surface; it minimises the ways _you_ can ensure there is nothing on the phone that shouldn't be there.
> Apple doesn't let software to have such deep access that is needed for scanning
Normalizing "security" software running in the background to "scan" things has proven a social and technical disaster. Users think it's normal to have such activity (and receive random "virus alerts"), leading to over two decades of social engineering scams, fraud, and malware-delivery. On top of that, "security" software has a habit of creating its own security holes and problems. Look at game anti-cheats (one was just on the front page the other day), the CrowdStrike incident, etc.
OS vendors should simply deliver a secure OS. That isn't easy, but it's still easier and more reliable than shipping third-party "security" software after the fact.
The issue isn’t “normalising pop-up virus scanners” or letting random vendors hook the kernel. It’s verifiability. On Apple platforms, the security model is explicitly “trust us bro”. You cannot independently inspect the system at the level required to detect certain classes of compromise, because Apple forbids it.
A platform where compromise is, by design, undetectable to the owner is not “more secure”, it’s merely less observable. That’s security through opacity, not security through design.
Yes, third-party security software has a bad history. So does third-party everything. That doesn’t magically make a closed system safer. It just moves all trust to a single vendor, removes independent validation, and ensures that when something slips through, only the platform owner gets to decide whether it exists.
“The OS vendor should deliver a secure OS” is an aspiration, not an argument. No OS is bug-free. Defence in depth means independent mechanisms of inspection, not just a promise that the walls are high enough.
Apple’s model works well for reducing mass-market malware and user error. It does not work for high-assurance trust, because you cannot verify state. If you can’t audit, you can’t prove clean. You can only assume.
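To make "verify state" concrete, here is a minimal sketch (Python, purely illustrative) of the sort of independent audit an owner could run on a platform that allowed it: hash files on disk and compare them against a known-good baseline. The baseline format and paths are hypothetical, not any real tool or API; on iOS nothing third-party can inspect system state with this kind of scope, which is exactly the blind spot being described.

    # Illustrative only: compare files on disk against a known-good baseline of hashes.
    # The baseline JSON ({"path": "sha256 hex digest", ...}) is a hypothetical artifact.
    import hashlib
    import json
    import sys
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # Hash in chunks so large binaries don't need to fit in memory.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def audit(baseline_path: str) -> int:
        # Report files that are missing or whose contents no longer match the baseline.
        baseline = json.loads(Path(baseline_path).read_text())
        failures = 0
        for name, expected in baseline.items():
            p = Path(name)
            if not p.exists():
                print(f"MISSING  {name}")
                failures += 1
            elif sha256_of(p) != expected:
                print(f"MODIFIED {name}")
                failures += 1
        return failures

    if __name__ == "__main__":
        sys.exit(1 if audit(sys.argv[1]) else 0)

None of this proves a device is clean on its own, of course, but it is the kind of check that is simply impossible when the platform forbids that level of access.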
That may be a perfectly acceptable trade-off. But let’s not pretend it’s the same thing as stronger security. It’s a different philosophy, and it comes with real blind spots - which to some people make Apple devices a non-starter.