Interesting approach. The mmap streaming idea is clever, but I'd love to see real-world benchmarks beyond TinyLlama — especially for the 140B claim. Running that on a Mac Mini with 16GB would be the real proof point.
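For anyone unfamiliar with the trick: the point of mmap-ing the weight file is that the OS faults pages in on demand, so resident memory is bounded by what you actually touch rather than the full model size. A minimal sketch of the idea (the file path, layout, and sizes here are made up for illustration, not from the post):

```python
import mmap
import os
import struct

# Write a tiny dummy "weights" file so the sketch is self-contained.
path = "/tmp/demo_weights.bin"
with open(path, "wb") as f:
    f.write(struct.pack("<4f", 1.0, 2.0, 3.0, 4.0))

with open(path, "rb") as f:
    # Map the file read-only: nothing is loaded up front; the kernel
    # pages weights in lazily as they are accessed, and can evict
    # clean pages under memory pressure.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Touch only the slice we need (e.g. one layer's weights).
    layer = struct.unpack_from("<2f", mm, 0)
    print(layer)  # (1.0, 2.0)
    mm.close()
os.remove(path)
```

The flip side, and why I'd want the benchmarks, is that when the working set exceeds RAM you're effectively trading memory for disk bandwidth, so tokens/sec falls off a cliff even though it "runs".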

For context, I run a Mac Mini M4 as a homelab server and the memory pressure from even 7B models is noticeable. Curious how this handles sustained inference without thermal throttling.