The issue isn’t “normalising pop-up virus scanners” or letting random vendors hook the kernel. It’s verifiability. On Apple platforms, the security model is explicitly “trust us bro”. You cannot independently inspect the system at the level required to detect certain classes of compromise (a persistent kernel-level implant, say), because Apple forbids it.

A platform where compromise is, by design, undetectable to the owner is not “more secure”; it’s merely less observable. That’s security through opacity, not security through design.

Yes, third-party security software has a bad history. So does third-party everything. That doesn’t magically make a closed system safer. It just moves all trust to a single vendor, removes independent validation, and ensures that when something slips through, only the platform owner gets to decide whether it exists.

“The OS vendor should deliver a secure OS” is an aspiration, not an argument. No OS is bug-free. Defence in depth means independent mechanisms of inspection, not just a promise that the walls are high enough.

Apple’s model works well for reducing mass-market malware and user error. It does not work for high-assurance trust, because you cannot independently verify system state. If you can’t audit, you can’t prove a device is clean. You can only assume it is.
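
To make “you cannot verify state” concrete: the closest an owner gets on macOS is asking Apple’s own frameworks to vouch for a binary. A minimal sketch below uses the Security framework’s static code-signing checks; the path is just an illustrative example, and the interpretation in the comments is my assumption, not Apple’s documentation. The point is that even a successful check only says the bundle matches what Apple’s signing chain expects, not that nothing is running beneath it.

```swift
import Foundation
import Security

// Ask the Security framework whether a bundle's code signature is valid.
// "Valid" here means "consistent with the signature and Apple's trust chain".
// It is not an independent audit of what is actually loaded in the kernel --
// the check bottoms out in the platform owner's own root of trust.
func signatureStatus(of path: String) -> OSStatus {
    var staticCode: SecStaticCode?
    let url = URL(fileURLWithPath: path) as CFURL
    let created = SecStaticCodeCreateWithPath(url, [], &staticCode)
    guard created == errSecSuccess, let code = staticCode else { return created }
    return SecStaticCodeCheckValidity(code, [], nil)
}

// Example path; any signed bundle would do.
let status = signatureStatus(of: "/Applications/Safari.app")
print(status == errSecSuccess
      ? "signature verifies against Apple's chain"
      : "verification failed with OSStatus \(status)")
```

That ceiling is the whole argument: the tools you’re allowed to run can only confirm the platform’s own attestations, never inspect below them.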

That may be a perfectly acceptable trade-off. But let’s not pretend it’s the same thing as stronger security. It’s a different philosophy, and it comes with real blind spots, which for some people make Apple devices a non-starter.