Gemma 4 26B really is an outlier in its weight class.

In our little-known, difficult-to-game benchmarks, it scored about as well as GPT 5.2 and Gemini 3 Pro Preview on one-shot coding problems. It had me re-reviewing our entire benchmarking methodology.

But it struggled in the other two sections of our benchmark: agentic coding and non-coding decision making. Tool use, iterative refinement, managing large contexts, and reasoning outside of coding brought the scores back down to reality. It actually performed worse when it had to use tools and a custom harness to write code for an eval than when it got the chance to one-shot it. No doubt it's been overfit on common harnesses and agentic benchmarks. But the main problem is likely scaling context on small models.

Still, incredible model, and incredible speed on an M-series MacBook. Benchmarks at https://gertlabs.com

I have very mixed feelings about that model. I want to like it. It's very fast and seems fit for many uses. I strongly dislike its "personality", but it responds well to system prompts.

Unfortunately, my experience with it as a coding assistant is very poor. It doesn't understand libraries it seems to know about, it doesn't see the root causes of the problems I want it to solve, and it refuses to use MCP tools even when asked. It also has a very strong fixation on the concept of time: anything dated past January 2025, which I think is its knowledge cutoff, it will label as "science fiction" or "their fantasy" and role-play from there.

That’s funny, it failed my usual ‘hello world’ benchmark for LLMs:

“Write a single file web page that implements a 1 dimensional bin fitting calculator using the best fit decreasing algorithm. Allow the user to input bin size, item size, and item quantity.”

Qwen3.5, Nemotron, Step 3.5, and gpt-oss all passed on the first go.
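
For reference, the heuristic the prompt asks for fits in a few lines. Here's a minimal Python sketch of best-fit decreasing, assuming the page's UI has already expanded item quantities into a flat list of sizes (the single-file HTML wrapper is omitted):

    # Best-fit decreasing (BFD): sort items largest-first, then place each
    # item into the bin with the least remaining space that still fits it;
    # open a new bin when nothing fits.
    def best_fit_decreasing(bin_size, items):
        bins = []  # each bin is a list of item sizes
        for item in sorted(items, reverse=True):
            if item > bin_size:
                raise ValueError(f"item {item} exceeds bin size {bin_size}")
            best = None
            best_rem = None
            for b in bins:
                rem = bin_size - sum(b)
                if item <= rem and (best_rem is None or rem < best_rem):
                    best, best_rem = b, rem
            if best is None:
                bins.append([item])
            else:
                best.append(item)
        return bins

    # Bin size 10, items already expanded by quantity.
    print(best_fit_decreasing(10, [7, 5, 5, 3, 2, 2]))  # [[7, 3], [5, 5], [2, 2]]

The O(n²) scan over bins is fine at calculator scale; a real implementation might keep bins in a structure keyed by remaining space.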

Overall it's a very good open-weights model! Notably, I thought it made more dumb coding mistakes than GPT-OSS on my M5, but it's fairly close overall.

For me, the vision/OCR is much better than in other models in its weight class.

Gemma 31B scoring below 26B-A4B?

In one-shot coding, surprisingly, yes, by a decent amount. And it isn't a sample-size issue. In agentic, no: https://gertlabs.com/?agentic=agentic

My early takeaway is that Gemma 26B-A4B is the best-tuned of the bunch, but being small, with few active params, it's severely constrained by context (large inputs and tasks with large required outputs tank Gemma 26B's performance). We're working on a clean visualization for this; the data is there.

It's not uncommon for a sub-release of a model to show improvements across the board on its model card, but actually have mixed real performance compared to its predecessor (sometimes even being worse on average).

In early tests, gemma-4-31B's performance was affected by tokenizer bugs in many of the existing backends, like llama.cpp, which their maintainers later corrected.

Moreover, tool invocation had problems that were later corrected by Google in an updated chat template.
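
Both failure modes are easy to catch before trusting a run. A minimal sanity-check sketch using Hugging Face transformers, assuming the reference tokenizer is on the hub (the model id below is hypothetical; substitute the real slug):

    # Check the tokenizer and chat template before re-running benchmarks.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("google/gemma-4-31b")  # hypothetical id

    # 1) Round-trip: encode/decode should reproduce the text exactly;
    #    a mismatch points at the kind of tokenizer bug described above.
    text = "def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)"
    assert tok.decode(tok.encode(text, add_special_tokens=False)) == text

    # 2) Chat template: render a tool-calling turn and eyeball whether the
    #    tool schema and call markers match what the model was trained on.
    def get_weather(city: str):
        """Return the current weather for a city.

        Args:
            city: Name of the city to look up.
        """
        ...

    messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
    print(tok.apply_chat_template(
        messages, tools=[get_weather], tokenize=False, add_generation_prompt=True
    ))

If the rendered prompt changes after a template update, scores from before the fix aren't comparable.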

So any early benchmarks that showed the dense model as inferior to the MoE model are likely flawed and should be repeated after updating both the inference backend and the model.

All benchmarks that I have seen after the bugs were fixed have shown the dense model as clearly superior in quality, even if much slower.

We add samples every week, so I'm curious if the numbers will move.

They did a similar re-release during the Gemini 3.1 Pro Preview rollout and released a custom-tools version with its own slug, which performs MUCH better on custom harnesses (mostly because the original release could not figure out tool-call formatting at all).