Somehow, in the AI world, "local-first" means a local harness talking to a remote model, almost never "local harness talking to local model". But then "open source model" apparently also means "you can download the weights if you agree to our license" and almost never "you can see, understand and iterate on what we did", so the definitions have already drifted a lot between the two ecosystems.

Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.
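
For example, pointing a stock OpenAI client at Ollama's local endpoint looks roughly like this. This is a sketch of the general OpenAI-compatible pattern, not Atomic's actual setup; the port and model name are Ollama defaults:

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API at /v1 on its default port
# (11434); LM Studio does the same on port 1234. Requires a running
# local server and a model you've already pulled.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # local servers ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.1",  # any model available on your local server
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)
```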

But local-first !== defaults to local inference, right?

I'm not sure I understand the question. Regardless of which provider you choose - be it cloud-based or local - you have to provide setup information such as host, authentication, etc. So it "defaults" to nothing; you have to select something.
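
To make that concrete, here's a rough sketch of the kind of setup information involved. The field names and values are illustrative, not Atomic's actual config format; the point is that cloud and local providers need the same shape of configuration:

```python
from openai import OpenAI

# Hypothetical provider configs: same fields either way, just different values.
providers = {
    "cloud": {"base_url": "https://api.openai.com/v1", "api_key": "sk-..."},
    "local": {"base_url": "http://localhost:11434/v1", "api_key": "unused"},
}

choice = "local"  # nothing is selected for you; you pick one
client = OpenAI(**providers[choice])
```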

Maybe this will be a clearer question: What does "local-first" mean in the title that you typed in for this HN submission?

Local-first means running Atomic with local models is not an afterthought. It's a first-class citizen that works just as seamlessly as running with a cloud provider - assuming you've done the work to provision the local models and their connections yourself.