Kudos on the launch. Love the local-AI approach.
Regarding open models: what is the go-to way for me to make Surf run with qwen3-vl? Ollama?
As far as I understand, any endpoint that supports the completions API will work?
https://github.com/deta/surf/blob/main/docs/AI_MODELS.md
If I attach image context, will it be provided to qwen3-vl? Or does this only work with the "main" models like OpenAI, Anthropic, Gemini, and so on?
Thank you.
Yes, we support any endpoint that supports the completions API. And yes, Ollama is probably the easiest to set up. Images should also work with qwen3-vl.
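For reference, here's a rough sketch (not Surf's actual code) of what a vision request against an OpenAI-compatible chat completions endpoint looks like, using Ollama's default local port. The `qwen3-vl` model tag and the image file name are assumptions, so adjust them to whatever you've pulled:

```ts
// Minimal sketch: send an image to a local OpenAI-compatible endpoint
// (here Ollama's default http://localhost:11434/v1). Run with Node 18+ (ESM).
import { readFileSync } from "node:fs";

// Assumed local image; images go in as base64 data URLs in the OpenAI format.
const imageBase64 = readFileSync("screenshot.png").toString("base64");

const res = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Ollama ignores the key, but OpenAI-style clients usually send one.
    Authorization: "Bearer ollama",
  },
  body: JSON.stringify({
    model: "qwen3-vl", // assumed tag; check `ollama list` for the exact name
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "What is in this image?" },
          {
            type: "image_url",
            image_url: { url: `data:image/png;base64,${imageBase64}` },
          },
        ],
      },
    ],
  }),
});

const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```

If that request works outside Surf, the same endpoint and model name should work when you point Surf at it.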
But if you run into any issues, please feel free to submit a bug report at https://github.com/deta/surf/issues
Edit: fixed GitHub issues link