Yep, maybe I can open a feature request if it makes sense technically.
Arguably it makes more sense technically to get the model support into llama.cpp, which provides many options for GPU+CPU split inference already.
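For context, a minimal sketch of what that GPU+CPU split looks like through llama.cpp's C API: the `n_gpu_layers` field in the model parameters controls how many layers are offloaded to the GPU, with the remainder evaluated on the CPU. The model filename and layer count here are placeholders, and exact API names can differ between llama.cpp versions.

```c
#include "llama.h"

int main(void) {
    // Default model parameters; n_gpu_layers = 0 means pure CPU inference.
    struct llama_model_params mparams = llama_model_default_params();

    // Offload 20 layers to the GPU, keep the rest on the CPU
    // (hypothetical split -- tune to available VRAM).
    mparams.n_gpu_layers = 20;

    struct llama_model *model =
        llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }

    // ... create a context and run inference as usual ...

    llama_free_model(model);
    return 0;
}
```

The same split is exposed on the command-line tools via `-ngl` / `--n-gpu-layers`, so no code changes are needed to experiment with different splits.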