Ah, I see. My foray into ML in recent times has mostly concentrated on theoretical models (transformers obviously, but also Mamba, SSMs, etc.) & kernel generation frameworks (such as ThunderKittens and Triton), not really on the system architecture level.
I've implemented KV caching in C++ and seen it implemented in Python, so I see your point.
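For context, the core idea is small enough to fit in a few lines. Here's a minimal single-head sketch in Python (NumPy standing in for real tensors; the class and method names are my own, not from any framework): at each decode step you append the new token's key/value instead of recomputing K and V over the whole prefix.

```python
import numpy as np

class KVCache:
    """Toy single-head KV cache sketch (hypothetical API, not a real library)."""

    def __init__(self, d_model):
        # Cached keys/values grow by one row per decoded token.
        self.keys = np.empty((0, d_model))
        self.values = np.empty((0, d_model))

    def append(self, k, v):
        # k, v: (d_model,) vectors for the newly generated token only --
        # this is the whole point of the cache: no recompute over the prefix.
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

    def attend(self, q):
        # Scaled dot-product attention of the new query over all cached K/V.
        scores = self.keys @ q / np.sqrt(q.shape[0])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ self.values
```

Per decode step this turns an O(n) recompute of K/V into an O(1) append, at the cost of O(n · d_model) memory per head, which is exactly why cache memory management becomes a systems problem at scale.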
No large-scale training & inference either, that's cool. If the model can't even fit onto a single GPU, I can see how memory communication becomes a significant issue, since you'd have to manage it through Python if you're orchestrating Python kernels. (Though you technically could just push all that responsibility down to the lower levels yet again... not a good idea, and it pollutes responsibilities.)
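To make the communication point concrete, here's a toy tensor-parallelism sketch (no real GPUs; "devices" are just array shards, and `sharded_matmul` is a name I made up). The weight matrix is split column-wise, each shard computes its slice of the output locally, and the slices then have to be gathered back together. That gather step is the cross-device communication that something at the Python orchestration layer would have to schedule:

```python
import numpy as np

def sharded_matmul(x, weight, n_devices):
    """Hypothetical sketch: column-parallel matmul across n_devices shards."""
    # Split the weight column-wise, one shard per simulated "device".
    shards = np.array_split(weight, n_devices, axis=1)
    # Each "device" computes its slice of the output independently.
    partial_outputs = [x @ w for w in shards]
    # The concatenate stands in for the all-gather communication step.
    return np.concatenate(partial_outputs, axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))
w = rng.normal(size=(8, 16))
# Sharded result matches the unsharded matmul.
assert np.allclose(sharded_matmul(x, w, 4), x @ w)
```

In a real setup the concatenate is an all-gather over an interconnect, so its cost scales with the activation size per layer, which is why pushing that scheduling into opaque lower levels (rather than keeping it visible at the orchestration layer) muddies who owns the communication plan.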