Most games have to do this for performance reasons at some point, and there are plenty of variants to choose from. Rust has libraries for some of them, but in C, rolling it yourself is the idiom. One I used in C++ that worked well as a retrofit was to overload new to grab the smallest chunk that would fit the allocation from banks of pre-sized chunks. Profiling under load let the bank sizes be tuned for efficiency. Nothing had to know it wasn't a real heap allocation, but it was way faster, with zero possibility of memory fragmentation.
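
A minimal sketch of that banked-allocation idea, under assumptions: global operator new/delete hand out fixed-size chunks from a few pre-sized banks and fall back to the system allocator for oversized requests. The bank sizes and counts are hypothetical placeholders you would tune from profiling; a real version would also need thread safety.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

namespace {

struct Bank {
    std::size_t chunk_size;   // size of each chunk in this bank
    std::size_t chunk_count;  // number of chunks reserved up front
    char*       storage;      // raw backing memory
    void*       free_head;    // head of an intrusive free list

    void init() {
        storage = static_cast<char*>(std::malloc(chunk_size * chunk_count));
        free_head = nullptr;
        // Thread every chunk onto the free list.
        for (std::size_t i = 0; i < chunk_count; ++i) {
            void* chunk = storage + i * chunk_size;
            *static_cast<void**>(chunk) = free_head;
            free_head = chunk;
        }
    }

    void* take() {
        if (!free_head) return nullptr;  // bank exhausted
        void* chunk = free_head;
        free_head = *static_cast<void**>(chunk);
        return chunk;
    }

    bool owns(const void* p) const {
        const char* c = static_cast<const char*>(p);
        return c >= storage && c < storage + chunk_size * chunk_count;
    }

    void give_back(void* p) {
        *static_cast<void**>(p) = free_head;
        free_head = p;
    }
};

// Hypothetical bank layout; real sizes/counts come from profiling under load.
Bank g_banks[] = {
    {32,   4096, nullptr, nullptr},
    {128,  2048, nullptr, nullptr},
    {1024, 512,  nullptr, nullptr},
};

bool g_initialized = false;

void ensure_init() {
    // Lazy one-time setup; not thread-safe as written.
    if (!g_initialized) {
        for (Bank& b : g_banks) b.init();
        g_initialized = true;
    }
}

} // namespace

void* operator new(std::size_t size) {
    ensure_init();
    // Pick the smallest bank whose chunk size fits the request.
    for (Bank& b : g_banks) {
        if (size <= b.chunk_size) {
            if (void* p = b.take()) return p;
        }
    }
    // Oversized request or exhausted banks: fall back to the system allocator.
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept {
    if (!p) return;
    for (Bank& b : g_banks) {
        if (b.owns(p)) { b.give_back(p); return; }
    }
    std::free(p);
}
```

Callers keep using plain new/delete and never see the difference; the retrofit lives entirely in these two overloads.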

Most pre-2010 games had to. As a former gamedev from after that period, I can confidently say it is a relic of the past in most cases now. (Not that I don't care, but I don't have to be that strict about allocations.)

Because why?

Virtual memory gets rid of a lot of fragmentation issues.

Yeah. Fragmentation was a niche concern of that embedded use case. The hardware had an MMU; it just wasn't used by the RTOS. I am surprised that allocations aren't a major hitter anymore. I still have to minimize or eliminate them in Linux signal processing code to stay realtime.
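
A minimal sketch of that allocation-free pattern, under assumptions: every buffer is allocated before the realtime loop starts, so the processing path never touches the heap. The block size and the processing body are hypothetical placeholders.

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t kBlockSize = 1024;  // samples per block (assumed)

void process_block(const std::vector<float>& in, std::vector<float>& out) {
    // Placeholder DSP: copy input to output. The point is that it only writes
    // into memory that already exists; no resize, no push_back, no new.
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i];
}

int main() {
    // All allocation happens here, before the realtime section begins.
    std::vector<float> input(kBlockSize, 0.0f);
    std::vector<float> output(kBlockSize, 0.0f);

    // Realtime loop; a fixed iteration count stands in for "run until stopped".
    for (int block = 0; block < 1000; ++block) {
        // ... fill input from the driver here ...
        process_block(input, output);
        // ... hand output back to the driver ...
    }
    return 0;
}
```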

Probably because hardware has become powerful enough that you can make a performant game without thinking much about allocations.