I tried it on my mac, for coding, and I wasn't really impressed compared to Qwen.

I guess there are things it's better at?

Assuming you’re not copy/pasting for these tasks, what’s the stack required to use local models for coding? I’ve got a machine capable enough to produce tokens (slowly), but I don’t understand how to connect that to the likes of VS Code or a JetBrains IDE.

You need some way to give it tools - the essential ones for coding are running bash commands, reading files and editing files.

You need the LLM to be able to respond with tool use requests, and then your local harness to process them and respond to it. You can read how tool calling works with, e.g., the Claude API to get the idea: https://platform.claude.com/docs/en/agents-and-tools/tool-us...

Under the hood something like Claude Code is calling the API with tools registered, and then when it gets a tool use request it runs that locally, and then responds to the API with the result. That’s the loop that enables coding.
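That loop can be sketched in a few lines. This is a toy, not the actual Claude Code implementation: `call_model` is a hard-coded stub standing in for a real API call, and the message/tool shapes are made up for illustration.

```python
import subprocess

def call_model(messages):
    # Hypothetical stand-in for a real model API call. A real client would
    # send the conversation plus registered tool schemas and get back either
    # text or a tool-use request. This stub asks for one bash command, then
    # finishes once it sees a tool result in the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_use", "tool": "bash",
                "input": {"command": "echo hello"}}
    return {"type": "text", "text": "done"}

def run_tool(name, args):
    # The harness executes tool requests locally.
    if name == "bash":
        out = subprocess.run(args["command"], shell=True,
                             capture_output=True, text=True)
        return out.stdout
    raise ValueError(f"unknown tool: {name}")

def agent_loop(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = call_model(messages)
        if reply["type"] == "tool_use":
            result = run_tool(reply["tool"], reply["input"])
            # Feed the tool result back to the model and continue the loop.
            messages.append({"role": "tool", "content": result})
        else:
            return reply["text"]

print(agent_loop("say hello"))
```

Swap the stub for a real API (or a local llama-server endpoint) and add file read/edit tools, and you have the core of a coding agent.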

Integrating with an IDE specifically is really just a UI feature, rather than the core functionality.

You're comparing apples to oranges there. Qwen 3.5 is a much larger model at 397B parameters vs. Gemma's 31B. Gemma will be better at answering simple questions and doing basic automation, and codegen won't be its strong suit.

Qwen3.5 comes in various sizes (including 27B), and judging by the posts on HN, /LocalLlama, etc., it seems to be better at logic/reasoning/coding/tool calling compared to Gemma 4, while Gemma 4 is better at creative writing and world knowledge (basically nothing has changed from the Qwen3 vs. Gemma3 era).

Does this also apply to Gemma's 26B-A4B vs., say, Qwen's 35B-A3B?

I'm not sure if I can make the 35B-A3B work with my 32 GB machine.

It should be easy with a Q4 (quantization to 4 bits per weight) and a smallish context.

You won't have much RAM left over though :-/.

At Q4, ~20 GiB

https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF
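A rough back-of-the-envelope check on that figure (ignoring the KV cache and runtime overhead, which add a few more GiB; the 4.5 bits/weight average is an assumption, since Q4 quants spend a bit over 4 bits once block scales are included):

```python
params = 35e9          # 35B parameters
bits_per_weight = 4.5  # assumed average for a Q4-family quant, incl. scales
weight_bytes = params * bits_per_weight / 8

# Weights alone come to roughly 18 GiB; add context/KV cache and
# runtime overhead and you land around the ~20 GiB mark.
print(f"{weight_bytes / 2**30:.1f} GiB")  # → 18.3 GiB
```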

For llama-server (and possibly other similar applications) you can specify the number of GPU layers (e.g. `--n-gpu-layers`). By default this is set to run the entire model in VRAM, but you can set it to something like 64 or 32 to get it to use less VRAM. This trades off speed, since it will need to swap layers in and out of VRAM as it runs, but it lets you run a larger model, a larger context, or additional models.
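For example, an invocation might look like this (the model filename and layer/context numbers are placeholders; check `llama-server --help` for the flags your build supports):

```shell
# Offload only 32 of the model's layers to the GPU; the rest stay in RAM.
# Fewer GPU layers means less VRAM used, at the cost of slower generation.
llama-server \
  --model Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --n-gpu-layers 32 \
  --ctx-size 8192
```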

Gemma 4 31B is still not impressive at coding compared to even Qwen 3.5 27B. It's just not its strong suit.

So far Gemma 4 seems excellent at role playing and document analysis, and decent at making agentic decisions.

This has been my experience as well; Qwen run locally via Ollama has been very impressive.