Are you running it locally with llama.cpp? If so, is it working without any tweaking of the chat template? The tool calls fail for me when using the default chat template, however it seems to work a whole lot better with this: https://huggingface.co/Qwen/Qwen3.5-35B-A3B/discussions/9#69...
Have you tried the '--jinja' flag in llama-server?
Yes, it fails too. I'm using the Unsloth Q4_K_M quant. It similarly failed with Devstral 2 Small; I fixed that by using a similar template I found for it. Maybe it's the quants that are broken; I guess I need to redownload them.
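For anyone hitting the same thing: this is roughly what a llama-server invocation with `--jinja` plus an overridden chat template looks like. The model path and template filename below are placeholders; the template file would be the fixed one from the HF discussion linked above.

```shell
# Launch llama-server with Jinja templating enabled and a custom
# chat template instead of the one baked into the GGUF.
# Paths are placeholders -- point them at your own files.
llama-server \
  -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --jinja \
  --chat-template-file ./qwen-fixed.jinja \
  --port 8080
```

Without `--chat-template-file`, `--jinja` alone just renders the template embedded in the GGUF, so a broken embedded template will still break tool calls.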