Recent ["self-propagating NPM malware"](https://news.ycombinator.com/item?id=45260741) reminds us that the predominant security model is basically whack-a-mole: you gotta trust _every_ piece of software you run (including all the libraries, plugins, etc), unless you explicitly sandbox it.

Capability-based security might offer an alternative: software should not have access to anything it hasn't been explicitly granted. In other words, "classic" desktop security is a blacklist model (everything is allowed unless explicitly restricted, e.g. via a sandbox), while capability-based security is more like a whitelist.

On the programming language level it's usually known as the object-capability model, and there are a number of programming languages which implement it: https://en.m.wikipedia.org/wiki/Object-capability_model
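To make the contrast concrete, here's a minimal TypeScript/Node sketch (names like `parseConfig*` and `ReadCap` are made up for illustration): in the ambient-authority style a library you import can read any path it likes, while in the capability style it can only use the read capability it was explicitly handed.

```typescript
// Minimal sketch of ambient authority vs. object capabilities (Node.js).
// `parseConfig*` stands in for any third-party library function you pull in.
import { readFile } from "node:fs/promises";

// Ambient-authority style: the library takes a path, and nothing stops it
// from also reading ~/.ssh/id_rsa or anything else -- it has the whole fs.
async function parseConfigAmbient(path: string): Promise<unknown> {
  const text = await readFile(path, "utf8"); // could be ANY path
  return JSON.parse(text);
}

// Capability style: the caller mints a capability that can read exactly one
// file and hands it over. The library gets no ambient file-system access.
type ReadCap = { read(): Promise<string> };

function fileReadCap(path: string): ReadCap {
  // Only trusted code (e.g. the handler behind a file picker) creates these.
  return { read: () => readFile(path, "utf8") };
}

async function parseConfigCap(cap: ReadCap): Promise<unknown> {
  return JSON.parse(await cap.read()); // can only touch what it was given
}

// Usage: the untrusted parser never sees a path or the fs module.
parseConfigCap(fileReadCap("./app.config.json")).then(console.log, console.error);
```

Of course, in plain Node a library can still `import fs` on its own; the point of a real ocap language or runtime is that passing references like this is the _only_ way to acquire authority.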

The question: why isn't it more popular? It doesn't even seem to be widely known, let alone used. (Aside from isolated examples.)

Is there any chance it would be widely adopted?

I guess one objection is that people don't want to manually configure security. But perhaps it can be integrated into normal UX if we really think about it: e.g. if you select a file using a system-provided file picker, it could automatically grant access to just that file, since the access was explicitly authorized.

We live in a capability-based world. Most of my outlets give 120 volts at 15 amps and deny access to the whole power grid.

When I want to pay for a purchase, I have a number of possible economic capabilities, from 1 cent to 100 dollars.

On our GUI-oriented systems, file selections could automatically handle capabilities and the user might not even notice a difference.

The #1 problem is the IT mindset's unwillingness to adopt a default-deny philosophy.

A default-accept philosophy makes it easy for millions of holes to open up at first, and you spend the entire IT budget locking down things you don't need to, while missing the ones you can't see that actually need closing.

Default-deny is a one-time IT expenditure. Then you start poking holes to let things through. If a hole turns out to be dirty, you plainly see that dirty hole and plug it.

All of that applies equally to CPU designers.

"Default deny" is the Windows model of clicking "yes" for the incessant permissions dialog box.

Or the Linux model of prefixing every command with a "sudo".

It doesn't work.

Well, that's what happens when it's bolted onto something not designed for fine-grained access.

It's much different when UX is built around it. E.g. a web browser _has_ to treat web pages as untrusted. So instead of giving a web page access to the file system (which would be the equivalent of `sudo`), you select individual files/directories through a trusted browser UI, and they are made available through particular APIs which are basically equivalent to ocaps. So if you don't need to support POSIX APIs ("I WANT TO JUST fopen!!!"), it's much easier.
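Concretely, that's roughly how the File System Access API works in Chromium-based browsers today; a sketch (the button wiring is just for illustration):

```typescript
// Sketch: File System Access API (Chromium-based browsers).
// The page never sees a path or ambient file-system access -- only a handle
// to the one file the user explicitly picked, which is essentially an ocap.
async function openUserChosenFile(): Promise<void> {
  // showOpenFilePicker is Chromium-only and may be missing from your TS DOM
  // lib, hence the cast. It must be called from a user gesture (e.g. a click).
  const [handle]: FileSystemFileHandle[] = await (window as any).showOpenFilePicker();
  const file = await handle.getFile();
  console.log(`Granted access to ${file.name} (${file.size} bytes)`);
  console.log(await file.text());
}

document.querySelector("button")?.addEventListener("click", () => {
  openUserChosenFile().catch(console.error);
});
```

The handle is just an object reference the page can pass around (even to a worker), but it never widens into general file-system access.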

> It's much different when UX is built around it. E.g. a web browser

Why don't you run the web browser as init then?

I don't know much (if anything) about it, but it can be turned into an interesting thought experiment.

Let’s use Apple as an example, as they tend to do major transitions on a regular basis.

So, let's say that the top tier has already approved the new security model.

Now, how to do it?

My understanding is that most if not all APIs would have to be changed or replaced. So that's pretty much a new OS that needs new apps (if the APIs change, you cannot simply recompile the existing apps).

Now, if you expose the existing APIs to the new OS/apps, then what's the gain?

And if you don't expose them, then you basically need a VM. I mean, I don’t know Darwin syscalls, but I suspect you might need new syscalls as well.

And so you end up with a brand new OS that lives in a VM and has no apps. So it's likely order(s?) of magnitude more profitable to just harden the existing platforms.

The more fine-grained you make a capability system, the bigger the explosion in the number of permissions required by an application, and the greater the chance that some combination of permissions grants more access than intended.

It also requires rewriting all your apps.

It also might require hardware support to not be significantly slower.

"Just sandbox each app" has much fewer barriers to entry, so people have been doing that instead.

And systems like Android have been working with discrete permissions / capabilities, because they were able to start from scratch in a lot of ways, and didn't need to be compatible with 50 years of applications.

I presume this is because of compatibility reasons.

Back in the 70s and 80s, computers didn't contain valuable information worth protecting, and there was no Internet to transmit such information. So adding security mechanisms to operating systems made little sense. That's when today's mainstream operating systems were first developed: Unix, DOS, Windows. Since then, many architectural decisions in these operating systems haven't been revisited, in order to avoid breaking backward compatibility. Even where breaking it would achieve better security, no one is ready to make that sacrifice.

There are operating system projects with a focus on security that aren't just Unix-like systems or Windows clones, but they can't displace existing operating systems because of network effects (it's impractical to use a system nobody else uses).

Have a look at Microsoft MSIX.

This is something that needs to be baked into the operating system, and no major OS supports it today. The next best thing is to rely on a "secure environment" where applications can be installed and run, similar to phone apps or browser extensions. This environment would probably use application manifests to list entitlements (aka capabilities), like disk access, network access, etc. But until then, we're stuck with the ambient-authority security model.
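As a rough sketch of what that could look like (the `Manifest` shape and entitlement names here are entirely hypothetical, not any existing OS API):

```typescript
// Hypothetical sketch of a manifest-gated "secure environment".
// Entitlement names and the Manifest shape are made up for illustration.

type Entitlement = "read-documents" | "network" | "camera";

interface Manifest {
  name: string;
  entitlements: Entitlement[]; // everything not listed here is denied
}

class AppEnvironment {
  constructor(private readonly manifest: Manifest) {}

  private require(cap: Entitlement): void {
    if (!this.manifest.entitlements.includes(cap)) {
      throw new Error(`${this.manifest.name} was not granted "${cap}"`);
    }
  }

  async fetch(url: string): Promise<string> {
    this.require("network"); // default-deny: only declared apps get this far
    const res = await globalThis.fetch(url);
    return res.text();
  }
}

// Usage: an app that never declared "network" cannot make requests,
// no matter which code path it takes.
const env = new AppEnvironment({ name: "notes", entitlements: ["read-documents"] });
env.fetch("https://example.com").catch((e) => console.error(e.message));
```

The key property is the default-deny stance: anything the manifest doesn't list simply isn't reachable.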