There is some per-core overhead, you're right, but imo this reduction in usage is more likely down to how the kernel allocates the available memory, which is being reduced as well. With more memory the kernel will keep read caches around longer, prefer compressing memory over swapping to disk, purge/clean up reclaimable memory less often, and so on. It even scales its internal buffer sizes and vnode tables with total memory.
All good things imo: it dynamically makes the most of whatever is available, at the expense of making it harder to see a true baseline of the hard minimum required to operate.
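You can poke at some of that scaling directly with `sysctl`; a rough sketch below (these are the names on my machine, availability and values may differ by macOS version and hardware):

$ sysctl hw.memsize          # total RAM the kernel sees
$ sysctl kern.maxvnodes      # vnode table size, sized off total memory
$ sysctl vm.compressor_mode vm.swapusage   # compressed-memory mode and current swap usage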
Another fun thing to check: `vm_stat`
$ vm_stat
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free: 230295.
Pages active: 1206857.
Pages inactive: 1206361.
Pages speculative: 31863.
Pages throttled: 0.
Pages wired down: 470093.
Pages purgeable: 18894.
"Translation faults": 21635255.
Pages copy-on-write: 1590349.
Pages zero filled: 11093310.
Pages reactivated: 15580.
Pages purged: 50928.
File-backed pages: 689378.
Anonymous pages: 1755703.
Pages stored in compressor: 0.
Pages occupied by compressor: 0.
Decompressions: 0.
Compressions: 0.
Pageins: 832529.
Pageouts: 225.
Swapins: 0.
Swapouts: 0.
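Exactly which of those buckets count as "used" is up for debate, but as a rough sketch you can turn any of the page counts into something readable by multiplying by the page size from the header (4096 here; Apple Silicon machines report 16384), e.g.:

$ vm_stat | awk '/Pages active/ {printf "%.1f MiB active\n", $3 * 4096 / 1048576}'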
edit: no code fence / markdown support, or am I doing something wrong?