That's why I suggested using llama.cpp in my other comment.