Yes. Why even call it local-first when local isn't first? Not to mention, for some reason they decided to only support Ollama instead of letting you connect to any OpenAI-compatible server, which would make this work with any other inference server such as llama.cpp and vLLM as well as Ollama (and also with most SaaS inference providers, including OpenRouter, so the custom integration would not be necessary either: https://schizo.cooking/schizo-takes/9.html).
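
To make that concrete: against an OpenAI-compatible server, the only thing that has to change between llama.cpp, vLLM, Ollama, and OpenRouter is the base URL. A rough sketch of what a single generic integration could look like (the URLs and the key handling below are illustrative defaults, not anything Atomic actually ships):

```ts
// Sketch: one chat-completion call that works against any OpenAI-compatible
// server just by swapping the base URL. URLs below are common defaults, not
// anything specific to Atomic.
const ENDPOINTS = {
  "llama.cpp": "http://localhost:8080/v1",
  vllm:        "http://localhost:8000/v1",
  ollama:      "http://localhost:11434/v1",
  openrouter:  "https://openrouter.ai/api/v1",
};

async function chat(baseUrl: string, model: string, prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Local servers generally ignore the key; SaaS providers require it.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? "none"}`,
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) throw new Error(`chat/completions failed: ${res.status}`);
  const body = await res.json();
  return body.choices[0].message.content;
}
```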
Did you think local-first meant how a dropdown is sorted?
OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
The online documentation does not suggest that using a generic OpenAI-compatible server is an option, and it once again lists the non-local option first.
https://atomicapp.ai/getting-started/ai-providers/
> OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
Why is this necessary over just presenting the result of `/v1/models`?
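
For reference, every OpenAI-compatible server (llama.cpp's server, vLLM, Ollama's compat endpoint, OpenRouter) exposes `GET /v1/models`, so a generic model picker could be populated with something like the following sketch. The base URL and key handling are placeholders, not Atomic's actual code:

```ts
// Minimal sketch: populate a model dropdown from any OpenAI-compatible server.
// BASE_URL is a placeholder -- point it at llama.cpp, vLLM,
// Ollama (http://localhost:11434/v1), OpenRouter, etc.
const BASE_URL = process.env.OPENAI_BASE_URL ?? "http://localhost:11434/v1";
const API_KEY = process.env.OPENAI_API_KEY ?? "none"; // local servers usually ignore it

interface ModelList {
  data: { id: string }[];
}

async function listModels(): Promise<string[]> {
  const res = await fetch(`${BASE_URL}/models`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`GET /models failed: ${res.status}`);
  const body = (await res.json()) as ModelList;
  return body.data.map((m) => m.id);
}

listModels().then((ids) => console.log(ids));
```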
You can say it's just the ordering of a dropdown, but to me it seems pretty clear that this thing is developed with the idea that you'll most likely use a SaaS provider.
It has supported local LLMs from the beginning; it was not something that was just tacked on. I don't know what else to tell you. Your assumptions are just wrong.