Somehow, in the AI world, "local-first" means a local harness talking to a remote model, almost never "local harness talking to local model". But then "open source model" apparently also means "you can download the weights if you agree to our license" and almost never "you can see, understand and iterate on what we did", so the definitions have already drifted a lot between the two ecosystems.
Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.
But local-first !== defaults to local inference, right?
I'm not sure I understand the question. Regardless of what provider you choose - be it cloud-based or local - you have to provide setup information such as host, authentication, etc. So it "defaults" to nothing; you have to select something.
Maybe this will be a clearer question: What does "local-first" mean in the title that you typed in for this HN submission?
Local-first means running Atomic with local models is not an afterthought. It’s a first-class citizen that works just as seamlessly as running with a cloud provider - assuming you’ve done the work to provision the local models and their connections yourself.
I'm not sure what the dunk is supposed to be here... Atomic supports the exact same feature set with local models as it does with OpenRouter. Is your gripe just that OpenRouter is the first option in the dropdown?
Yes. Why even call it local-first when local isn't first? Not to mention that, for some reason, they decided to only support Ollama instead of giving you the option to connect to any OpenAI-compatible server, which would make this work with any other inference server such as llama.cpp and vLLM as well as Ollama (and also with most SaaS inference providers, including OpenRouter, so the custom integration would not be necessary either: https://schizo.cooking/schizo-takes/9.html).
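For concreteness, this is roughly what "any OpenAI-compatible server" buys you - the same client code against every backend, with only the base URL (and maybe the API key) changing. A rough sketch, not Atomic's actual code; the URLs and the model name below are just common defaults you'd adjust for your own setup:

```python
# Sketch: with an OpenAI-compatible API the client code is identical regardless
# of backend; only the base URL (and possibly the API key) changes.
# Base URLs are the usual defaults for each server, not anything Atomic-specific.
from openai import OpenAI

backends = {
    "ollama":     ("http://localhost:11434/v1", "ollama"),        # Ollama's OpenAI-compatible endpoint
    "llama.cpp":  ("http://localhost:8080/v1", "not-needed"),     # llama.cpp server default port
    "vllm":       ("http://localhost:8000/v1", "not-needed"),     # vLLM OpenAI server default port
    "openrouter": ("https://openrouter.ai/api/v1", "sk-or-..."),  # a SaaS provider, same protocol
}

base_url, api_key = backends["ollama"]
client = OpenAI(base_url=base_url, api_key=api_key)

resp = client.chat.completions.create(
    model="qwen2.5-coder:7b",  # whatever model the chosen server actually hosts
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```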
Did you think local-first meant how a dropdown is sorted?
OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
The online documentation does not suggest that using a generic OpenAI-compatible server is an option, and it once again lists the non-local option first.
https://atomicapp.ai/getting-started/ai-providers/
> OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
Why is this necessary over just presenting the result of `/v1/models`?
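I.e. something along these lines - a sketch of the suggestion, not Atomic's code; the base URL here just assumes a local Ollama instance:

```python
# Instead of hard-coding per-provider model pickers, ask the server what it serves.
# GET {base_url}/models is part of the OpenAI-compatible API, so the same call
# works against Ollama, llama.cpp, vLLM, or a SaaS provider.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

for model in client.models.list():
    print(model.id)  # populate the dropdown from whatever the server reports
```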
You can say it's just the ordering of a dropdown, but to me it seems pretty clear that this thing is developed with the idea that you'll most likely use a SaaS provider.
It has supported local LLMs from the beginning; it was not something that was just tacked on. I don't know what else to tell you. Your assumptions are just wrong.
Yes, hah.
"Local-first, your data never leaves the computer! Except once to go to the biggest information hoarders on the Internet."
Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.