This actually seems like a simple example of memory request vs limit.
Request the amount of memory the process needs to stay healthy; you can potentially set the limit higher to account for "reclaimable cache".
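Concretely, that might look like the container resources fragment below, a minimal sketch with made-up numbers (the 4Gi/6Gi values are assumptions, not a recommendation): request covers the working set, limit leaves headroom for cache.

```python
import json

# Hypothetical container resources: request = what the worker needs to stay
# healthy, limit = request plus headroom for page cache / mmap'ed segments.
container_resources = {
    "resources": {
        "requests": {"memory": "4Gi"},
        "limits": {"memory": "6Gi"},
    }
}

print(json.dumps(container_resources, indent=2))
```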
Another way to approach it, if you find that there are too many limiting metrics to accurately model things, is to let the workers grab more segments until you determine that they are overloaded. Ideally, for this to work, you have some signal that the node is approaching saturation. For example: keep adding segments as long as the nth-percentile response time is under some threshold.
The advantage of this approach is that you don't necessarily have to know which resource (memory, file handles, etc.) is at capacity. You don't even need deep knowledge of Linux memory management; you just have to be able to probe the system to determine whether it's healthy.
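A minimal sketch of that loop, assuming the two callables (claiming a segment, reading tail latency) are wired up to your own coordinator and metrics system, and that the 250 ms threshold is just a placeholder to tune:

```python
import time

def grab_segments_until_saturated(
    claim_next_segment,   # callable: take ownership of one more segment; returns False when none remain
    p99_latency_ms,       # callable: current nth-percentile response time for this worker
    threshold_ms=250.0,   # assumed saturation signal; tune per workload
    settle_seconds=60.0,  # let the new segment's load settle before probing again
):
    """Keep taking on segments while the node still looks healthy.

    The only knowledge of the system this needs is a health probe (here,
    tail latency), not which resource is actually the bottleneck.
    """
    while p99_latency_ms() < threshold_ms:
        if not claim_next_segment():
            break
        time.sleep(settle_seconds)
```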
You can even go backwards with a binary split mechanism: bring up a node that owns [A-H] (8 segments in this case); if that fails, bring up 2 nodes that own [A-D] and [E-H]; if those fail, keep splitting, all the way down to one segment per node.
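A sketch of the split, assuming `node_is_healthy(group)` is a hypothetical probe that brings up a node owning `group` and reports whether it stays healthy:

```python
def assign_by_binary_split(segments, node_is_healthy):
    """Try to serve `segments` from one node; if that node is unhealthy,
    split the range in half and recurse, down to one segment per node.

    Returns a list of segment groups, one group per node to bring up.
    """
    if len(segments) <= 1 or node_is_healthy(segments):
        return [segments]
    mid = len(segments) // 2
    return (assign_by_binary_split(segments[:mid], node_is_healthy)
            + assign_by_binary_split(segments[mid:], node_is_healthy))

# e.g. assign_by_binary_split(list("ABCDEFGH"), probe) yields
# [['A','B','C','D'], ['E','F','G','H']] if the half-sized nodes are healthy.
```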
mmap'ed memory counts as that "reclaimable cache", which isn't always reclaimable (dirty or active pages are not immediately reclaimable). But Kubernetes memory accounting assumes that the page cache is always reclaimable. This creates a lot of surprises and unexpected OOMs. https://github.com/kubernetes/kubernetes/issues/43916
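For a rough picture of the accounting that bites you here (a sketch against cgroup v1, with the path and formula assumed from how cAdvisor derives container_memory_working_set_bytes): only *inactive* file pages are subtracted as reclaimable, so active or dirty page cache from hot mmap'ed segments still counts against the limit.

```python
CGROUP = "/sys/fs/cgroup/memory"  # assumed cgroup v1 mount point for this container

def read_stat(name):
    """Read one counter from the container's memory.stat."""
    with open(f"{CGROUP}/memory.stat") as f:
        for line in f:
            key, value = line.split()
            if key == name:
                return int(value)
    return 0

def working_set_bytes():
    """Approximate the 'working set' that eviction decisions look at:
    total usage minus only the inactive file pages."""
    with open(f"{CGROUP}/memory.usage_in_bytes") as f:
        usage = int(f.read())
    return usage - read_stat("total_inactive_file")
```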