I always found that JIT and GC are a marriage destined to come together, but they never found one another entirely. The JIT marks the hot loop in the code, and thus can tell the GC in detail what a generation really is and how long a generation's lifetime really lasts.

It can reveal hidden cull conditions for long-lived generational objects. If that side branch is hit in the hot loop, all long-term objects of that generation are going to get culled in a single stroke… so bundle them and keep them bundled. And now they have started using it to at least detect objects that do not escape lambdas. Those are all stack, no GC involved at all. It's almost the static allocation thing we do for games. If the model proves that every hot-loop iteration allocates 5 objects which live until an external event occurs: static allocation, and it's done.
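The "bundle them and cull them in a single stroke" idea is essentially region/arena allocation: everything in a generation goes into one buffer and dies together. A minimal sketch, assuming a fixed-capacity bump allocator (the `Arena` names and sizes here are illustrative, not from any real GC):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy arena: objects allocated together are freed together in one stroke. */
typedef struct {
    char  *base;
    size_t used, cap;
} Arena;

static Arena arena_new(size_t cap) {
    Arena a = { malloc(cap), 0, cap };
    return a;
}

/* Bump-pointer allocation: no per-object bookkeeping at all. */
static void *arena_alloc(Arena *a, size_t n) {
    if (a->used + n > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* The "single stroke" cull: the whole generation dies at once. */
static void arena_free(Arena *a) {
    free(a->base);
    a->base = NULL;
    a->used = a->cap = 0;
}
```

A JIT that proved all objects in a loop share one lifetime could redirect their allocations into such an arena and drop the whole thing when the cull condition fires.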

Great start. But you could do so much more with this: imagine a custom JIT whose goal is not just to detect and byte-compile hot loops, but to build a complete multi-lifetime model of object generation.

Any interpreter could theoretically do those "marking" things. Also, JITs do far more than just byte-compile hot loops: _all_ cooperative modern GCs are enabled by JIT semantics for things like read and/or write barriers (these help a GC keep track of objects that keep getting "touched" while the GC works in parallel).
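A write barrier of the kind mentioned can be sketched as card marking: every pointer store the JIT emits also flags the "card" covering the written slot, so a concurrent collector only has to rescan dirty cards instead of the whole heap. A toy model, with made-up slot and card sizes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_SLOTS      512
#define SLOTS_PER_CARD 64    /* each card covers 64 pointer slots */
#define NUM_CARDS      (NUM_SLOTS / SLOTS_PER_CARD)

static void   *heap[NUM_SLOTS];       /* toy heap of pointer slots */
static uint8_t card_table[NUM_CARDS]; /* 1 = card was touched */

/* The JIT emits this next to every pointer store: do the store,
   then mark the covering card dirty for the concurrent GC. */
static void write_barrier(size_t slot, void *value) {
    heap[slot] = value;
    card_table[slot / SLOTS_PER_CARD] = 1;
}
```

Real barriers (card tables, snapshot-at-the-beginning, etc.) differ in detail, but the shape is the same: a couple of extra instructions woven into compiled code on the GC's behalf.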

Outside of the mentioned, things like detecting fine-grained lifetimes are very, very hard, and the mentioned escape analysis is an optimization that needs to be capped to avoid running into the halting problem. (1)
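To illustrate why escape analysis has to be capped: whether the allocation below escapes depends on a runtime value, and proving facts about arbitrary control flow reduces to the halting problem. This is a contrived sketch (the `leaked`/`work` names are made up), not any JIT's actual analysis:

```c
#include <assert.h>
#include <stdlib.h>

static int *leaked; /* storing into a global means the object "escapes" */

/* To stack-allocate x, the analyzer would have to prove cond can never
   be true here; in general that is undecidable, so real JITs bound the
   analysis and conservatively heap-allocate when unsure. */
static int work(int cond) {
    int *x = malloc(sizeof *x);
    *x = 42;
    if (cond) {
        leaked = x;   /* escapes: must outlive this call */
        return *x;
    }
    int v = *x;       /* never escapes: could have lived on the stack */
    free(x);
    return v;
}
```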

A fairly deep coverage of GC behaviours can be found in Bacon's "A Unified Theory of Garbage Collection", where the authors theoretically connect previous work on tracing collectors and reference-counting systems and show that optimized variants often exist in a design space between the two. (2)

1: https://en.wikipedia.org/wiki/Halting_problem

2: https://web.eecs.umich.edu/~weimerw/2008-415/reading/bacon-g...
