Run a server with ollama and use the Continue extension configured for ollama.
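
Roughly, something like this (the model name is just an example, use whatever you've pulled):

```sh
# start the ollama server; by default it listens on localhost:11434
ollama serve

# pull a model for Continue to use
ollama pull llama3
```

Then in Continue's config, add a model entry with `"provider": "ollama"` and the same model name, and the extension will talk to the local server.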

I'd stay away from ollama, just use llama.cpp; it's more up to date, better performing, and more flexible.
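
If you go that route, llama.cpp ships its own OpenAI-compatible server, roughly like this (the model path is just an example, point it at whatever GGUF you have):

```sh
# serve a local GGUF model over HTTP; Continue can be pointed at this endpoint as well
llama-server -m models/llama-3-8b-instruct.Q4_K_M.gguf -c 4096 --port 8080
```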

But you can't just switch between installed models like in ollama, can you?