Really cool.
But how do you use it instead of Copilot in VSCode?
Would love to know myself. I recall there was a VSCode plugin that did next-edit suggestions and accepted a custom model, but I can't remember which one it was.
Run a server with Ollama and use the Continue extension configured to point at it.
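Something like this in Continue's config.json (the model name is just an example, and newer Continue versions have moved to config.yaml, so check their docs for the current schema):

    {
      "models": [
        {
          "title": "Qwen2.5 Coder",
          "provider": "ollama",
          "model": "qwen2.5-coder:7b"
        }
      ],
      "tabAutocompleteModel": {
        "title": "Qwen2.5 Coder (autocomplete)",
        "provider": "ollama",
        "model": "qwen2.5-coder:7b"
      }
    }

Then `ollama pull qwen2.5-coder:7b` and Continue will talk to the local Ollama server on its default port (11434).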
I'd stay away from Ollama; just use llama.cpp. It's more up to date, better performing, and more flexible.
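Running a model with llama.cpp's server is a single command (model path here is just an example; -ngl controls how many layers get offloaded to the GPU):

    # start an OpenAI-compatible server on port 8080
    llama-server -m ./qwen2.5-coder-7b-q4_k_m.gguf --port 8080 -ngl 99

Continue (or any OpenAI-compatible client) can then be pointed at http://localhost:8080.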
But you can't just switch between installed models like in ollama, can you?
llama-swap can do that: it's a small proxy that loads and unloads llama.cpp models on demand. https://www.nijho.lt/post/llama-nixos/
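Roughly, a llama-swap config looks like this (sketched from memory of its README, so exact keys may differ between versions; model name and port are examples):

    # llama-swap config.yaml: one entry per model it can swap in
    models:
      "qwen-coder":
        cmd: llama-server --port 8999 -m ./qwen2.5-coder-7b-q4_k_m.gguf
        proxy: http://127.0.0.1:8999

Point your client at llama-swap's port and request a model by name; it launches the matching llama-server instance and proxies the traffic, stopping it when you switch to another model.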