Even if you disable overcommit, I don't think pages are assigned when you allocate. As long as your allocations don't fail, you should get the same behavior with respect to the disk cache using otherwise unused pages.

The difference is that allocations will fail, where there's a reasonable interface for errors, rather than failing at demand paging when writing to previously untouched pages, where there isn't one.
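
A minimal sketch of the difference (assuming Linux and glibc; the 1 GiB size is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t sz = (size_t)1 << 30;  /* 1 GiB, arbitrary */
        char *p = malloc(sz);
        if (p == NULL) {
            /* With overcommit disabled, this is where you find out:
               a clean, handleable error at the allocation site. */
            perror("malloc");
            return 1;
        }
        /* With overcommit enabled, the malloc above almost never fails;
           under memory pressure the process can instead be OOM-killed
           here, at first touch, where there's no error return to handle. */
        memset(p, 1, sz);
        free(p);
        return 0;
    }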

Of course, there are many software patterns where excessive allocations are made without any intent of touching most of the pages; that's fine with overcommit, but it will lead to allocation failures when you disable overcommit.

Disabling overcommit does make fork in a large process tricky. I don't think the article's rant about Redis is totally on target: fork-to-persist is a pretty good solution, copy-on-write is a reasonable cost to pay while dumping the data to disk, and everything returns to normal when the dump is done. But without overcommit, the fork doubles the memory commitment while the dump is running, and that's likely to cause issues if Redis is large relative to memory; that's worth checking for and warning about. The linked jemalloc issue seems like it could be problematic too, but I only skimmed it; that seems worth warning about as well.
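
A sketch of the fork-to-persist pattern (dump_to_disk() is a hypothetical stand-in for the actual serialization):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical: serialize the in-memory dataset to disk. */
    static void dump_to_disk(void) { /* ... */ }

    int main(void) {
        /* With overcommit=2, fork() charges the child's private writable
           mappings again up front, so it can fail with ENOMEM if the
           process is large relative to the commit limit. */
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* The child sees a COW snapshot while the parent keeps
               serving; only pages the parent writes to meanwhile are
               actually copied. */
            dump_to_disk();
            _exit(0);
        }
        waitpid(pid, NULL, 0);  /* commit charge drops when the child exits */
        return 0;
    }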

For the fork path, it might be nice if you could request overcommit in certain circumstances... fork but only commit X% rather than the whole memory space.
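
The closest existing knob I know of is per-mapping rather than per-fork: MAP_NORESERVE asks the kernel to skip commit accounting for a single mapping. Note that the kernel only honors it when overcommit isn't fully disabled, so it doesn't help in strict mode; a "fork but only commit X%" would need something new. A sketch:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t sz = (size_t)1 << 30;  /* 1 GiB, arbitrary */
        /* MAP_NORESERVE: don't charge this mapping against the commit
           limit. Honored in overcommit modes 0 and 1, ignored under
           overcommit_memory=2. */
        void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        munmap(p, sz);
        return 0;
    }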

You're correct it doesn't prefault the mappings, but that's irrelevant: it accounts them as allocated, and a later allocation which goes over the limit will immediately fail.

Remember, with overcommit=2 the limit is artificial, defined by the user via overcommit_ratio and user_reserve_kbytes. Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).
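
For reference, the strict-mode limit works out to roughly this (simplified; hugetlb pages and the admin/user reserves also factor in):

    CommitLimit = swap + physical RAM * overcommit_ratio / 100

So with the default overcommit_ratio of 50 and no swap, half of RAM is off-limits to userspace commit.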

> Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).

The RAM is not unusable; it will be used. Some portion of RAM may be unallocatable, but that doesn't mean it's wasted.

There's a tradeoff. With overcommit disabled, you get allocation failures rather than the OOM killer. But you'll likely get those allocation failures at memory pressure below what's needed to trigger the OOM killer. And if you're running a wide variety of software, you'll run into problems, because overcommit is the mainstream default for Linux, so many things are only widely tested with it enabled.

> The RAM is not unusable; it will be used. Some portion of RAM may be unallocatable

I think that's a meaningless distinction: if userspace can't allocate it, it is functionally wasted.

I completely agree with your second paragraph, but again, some portion of RAM obtainable with overcommit=0 will be unobtainable with overcommit=2.

Maybe a better way to say it is that a system with overcommit=2 will fail at a lower memory pressure than one with overcommit=0. Additional RAM would have to be added to the former system to successfully run the same workload. That extra RAM is the waste.

It's absolutely wasted if the apps on the server don't use the disk (the disk cache is pretty much the only thing that can use that reserved memory).

You can have a simple web server that took less than 100MB of RAM get charged gigabytes, just because it forked a few COW-ed worker processes.

If the overcommit ratio is 100% (overcommit_ratio=100), there is no portion rendered unusable? That seems to contradict your claim that it "necessarily" wastes RAM.

Read the comment again, that wasn't the only one I mentioned.

Please point out what you're talking about, because the comment is short and I've read it fully multiple times now.

> Even if you disable overcommit, I don't think pages are assigned when you allocate. As long as your allocations don't fail, you should get the same behavior with respect to the disk cache using otherwise unused pages.

Doesn't really change the point. The RAM might not be completely wasted, but given that nearly every app will over-allocate and just use the pooled memory, you will waste memory that could otherwise be used to run more stuff.

And it can be quite significant: it's pretty common for server apps to start one big process and then keep a COWed worker process per connection in a pool, so your apache2 eating maybe 60MB per worker in the pool is now in the gigabytes range at very small pool sizes.
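
Back-of-the-envelope (the pool size of 32 is assumed for illustration): under strict accounting each forked worker is charged for its private writable mappings in full, so

    32 workers x 60 MB ≈ 1.9 GB of commit charge

even though nearly all of those pages are shared COW pages that are never actually duplicated.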

The blog is essentially a call to "let's make apps like we did in the DOS era", which is ridiculous.