Would be interesting if this were a coding-focused model optimized for Mac inference. It would be a great way to undercut Anthropic.

Pretty much give away a Sonnet-level coding model and have it work alongside GPT-5 for harder tasks / planning.

Out of curiosity, have you tried running Qwen3 Coder 30B locally? https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-...

Not the GP, but I haven't. How is it? I use Claude Code with Sonnet; does Qwen3 compare?

I'm also using Claude Code and am very familiar with it, but I haven't had a chance to try Qwen3 Coder 30B A3B for any real-world development. That said, it did well in my "kick-the-tires" tests, and some benchmark comparisons suggest it's roughly comparable to Sonnet (at least before adding the various levels of 'think' directives):

https://llm-stats.com/models/compare/claude-3-7-sonnet-20250...
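If anyone wants to kick the tires themselves, here's a minimal sketch of the setup I'd assume for a Mac: serve the GGUF from the link above with something that exposes an OpenAI-compatible endpoint on localhost (llama.cpp's llama-server, or LM Studio), then talk to it from Python. The port and model name below are assumptions; use whatever your local server actually reports.

    # Minimal sketch: query a locally served Qwen3 Coder 30B A3B through an
    # OpenAI-compatible endpoint (e.g. llama.cpp's llama-server or LM Studio).
    # The base_url/port and model name are assumptions -- match them to your
    # own local server, which typically ignores the API key.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # llama-server default; LM Studio uses :1234
        api_key="not-needed",                 # placeholder; local servers usually don't check it
    )

    response = client.chat.completions.create(
        model="qwen3-coder-30b-a3b-instruct",  # hypothetical name; use the model id your server lists
        messages=[
            {"role": "system", "content": "You are a concise coding assistant."},
            {"role": "user", "content": "Write a Python function that reverses a linked list."},
        ],
        temperature=0.2,
    )

    print(response.choices[0].message.content)

Pointing an OpenAI-compatible client at localhost also means most Claude Code-style tooling that speaks that API can be aimed at the local model with just a base-URL change, which makes side-by-side comparisons with Sonnet pretty painless.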