Gemma was certainly trained for tool calling, but the implementation in llama.cpp has been troubled because Gemma uses a different chat template format. The processor from the transformers library works fine, though.
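For context, Gemma's turn format looks roughly like this. This is a simplified sketch, not the actual Jinja template shipped in the model files, and `format_gemma_prompt` is a hypothetical helper:

```python
def format_gemma_prompt(messages):
    """Render a message list in Gemma's turn format (simplified sketch).

    Gemma wraps each turn in <start_of_turn>/<end_of_turn> markers and
    maps the assistant role to "model", which is one reason generic
    ChatML-style template handling breaks on it.
    """
    out = []
    for msg in messages:
        role = "model" if msg["role"] == "assistant" else msg["role"]
        out.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    # Leave an open model turn so generation continues from here.
    out.append("<start_of_turn>model\n")
    return "".join(out)

prompt = format_gemma_prompt([
    {"role": "user", "content": "What is the weather in Paris?"},
])
print(prompt)
```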

Oh I must've missed this.

The AI space moves so fast! I'll check it out again.

Don't forget to update the GGUFs you have as well. The templates embedded in them were updated recently too.