For new folks, you can get a local code agent running on your Mac like this:
1. $ npm install -g @openai/codex
2. $ brew install ollama; ollama serve
3. $ ollama pull gpt-oss:20b
4. $ codex --oss -m gpt-oss:20b
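If you want a quick sanity check that the model works before launching the agent, a one-off prompt through ollama does it (standard ollama usage, nothing codex-specific):
$ ollama run gpt-oss:20b "Write a one-line hello world in Python"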
This runs fully locally, with no Internet needed once the model is pulled. I don't know if there's telemetry for codex, but you should be able to turn that off if so.
You need an M1 Mac or better with at least 24GB of unified memory (which the GPU shares). The model is pretty big, about 16GB of disk space in ~/.ollama.
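You can check what's actually on disk with either of these (both standard commands):
$ ollama list        # lists each pulled model with its size
$ du -sh ~/.ollama   # total space used by models and blobs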
Be careful: the 120b model is about 1.5× better than this 20b variant, but needs roughly 5× the resources.
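If your machine can handle it, swapping in the bigger model is the same two commands with the larger tag:
$ ollama pull gpt-oss:120b
$ codex --oss -m gpt-oss:120b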
I've been really impressed by OpenCode [0]. It removes the limitations of the frontier TUIs, and it feels feature-complete and performant compared to Codex or Claude Code.
[0] https://opencode.ai/
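For anyone who wants to try it: last I checked the npm package is opencode-ai, but treat that as an assumption and check the site for the current install method:
$ npm install -g opencode-ai
$ opencode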
> OpenCode will be available on desktop soon
Anyone happen to know what that means exactly? The install instructions at the top seem to indicate it's already available on desktop?
It's a terminal only (TUI) tool today. They're releasing a graphical (GUI) version in the future.
> It's a terminal only (TUI) tool today.
But to use that TUI you need a desktop, or at least a laptop I guess, so that distinction doesn't make sense to me. Are they referring to the GUI as the "Desktop Version"? I've never heard it put that way before, if so.
Have you been able to build or iterate on anything of value using just the 20b to vibe code?
As much as I've been using LLMs via API all day every day, being able to run one locally on my MacBook Air and talk to my laptop still feels like magic.
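The "talk to my laptop" part is just Ollama's local HTTP API on port 11434, so anything that can POST JSON can use it; a minimal sketch, assuming the 20b model from above is pulled:
$ curl http://localhost:11434/api/generate -d '{
    "model": "gpt-oss:20b",
    "prompt": "Why is the sky blue?",
    "stream": false
  }'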
LM Studio is even easier, and things like the JetBrains IDEs will connect to LM Studio, same with Zed.
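The reason editors hook up so easily is that LM Studio serves an OpenAI-compatible API locally, on port 1234 by default, so you point JetBrains or Zed at that base URL. A minimal sketch (the model id is whatever LM Studio shows for your loaded model, so treat "gpt-oss-20b" as a placeholder):
$ curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "gpt-oss-20b",
         "messages": [{"role": "user", "content": "hi"}]}'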