It wouldn’t be useful with your setup, probably 3-4 tokens per second.

Yep, maybe I can open a feature request if it makes sense technically.

Arguably it makes more sense technically to get the model support into llama.cpp, which already provides many options for GPU+CPU split inference.
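
For context, that split is mainly controlled by how many layers you offload to the GPU (the `-ngl` / `--n-gpu-layers` flag on the CLI), with the remaining layers run on the CPU. Roughly the same thing through the C API looks like the sketch below; exact entry-point names shift between llama.cpp releases and the model path is a placeholder, so treat it as illustrative rather than a definitive implementation.

```cpp
// Sketch only: llama.cpp's C API changes between releases, so entry-point
// names here are approximate. The key knob is n_gpu_layers, which offloads
// that many transformer layers to the GPU and leaves the rest on the CPU.
#include "llama.h"
#include <cstdio>

int main() {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 20;   // e.g. 20 layers on GPU, remainder on CPU

    // "model.gguf" is a placeholder path, not a specific model
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // ... create a context and run inference as usual ...

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```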