Please mention and support llama.cpp directly instead of Ollama.