It does for me, yes -- models seem to be pretty capable of adhering to the tool call format, which is really all that they 'need' in order to do a good job.
I'm still tweaking the prompts (and I've introduced a new, tool-call based edit format as the primary replacement for Aider's usual SEARCH/REPLACE blocks; it's easier for LLMs to use in some ways and harder in others, but it lets them express higher-level edits like 'change the name of this function' directly).
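To give a rough idea of what I mean (the tool name and fields here are purely illustrative, not the actual schema in this branch), a tool-call edit the model emits might look something like:

```python
# Hypothetical shape of a tool-call edit request emitted by the model.
# Names and fields are illustrative only, not the real format in this branch.
rename_call = {
    "name": "rename_symbol",
    "arguments": {
        "file_path": "src/parser.py",   # file containing the symbol
        "old_name": "parse_cfg",        # current function name
        "new_name": "parse_config",     # desired function name
    },
}
```

The point is that the model states the intent of the edit instead of having to reproduce exact SEARCH/REPLACE text.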
So... if you have any trouble with it, I would adjust the prompts (in `navigator_prompts.py`, or in `navigator_legacy_prompts.py` for non-tool-based editing). In particular, when I adopted more 'be terse and stop proactively' prompting, weaker LLMs started stopping prematurely more often. That style helps powerful thinking models (like Sonnet and Gemini 2.5 Pro), but for smaller models I might need to provide an extra set of prompts that let them roam more.
So I understand how these prompts work for tooling, etc., but they tend to be specific to particular models. Could you actually supply, say, 10 prompts for the same tool and determine which one gets the correct output? It wouldn't be much harder than having some test cases and running each prompt through the user-selected model to see which one works.
Otherwise you're at the mercy of whatever model the user has selected or downloaded, and of having to re-tune by hand whenever you need to tweak a prompt to improve something.
This would be akin to how we used to calibrate styluses or touch screens.
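Concretely, something like this rough sketch (here `run_model` is just a hypothetical stand-in for however the selected model gets invoked, and the prompts/test cases are made up):

```python
# Rough sketch of the calibration idea: run each candidate prompt against a few
# known test cases on the user's selected model and keep the best performer.
# run_model(prompt, task) is a hypothetical stand-in for the actual model call.

CANDIDATE_PROMPTS = [
    "You are a coding assistant. Use the provided tools to edit files...",
    "Edit files ONLY via tool calls. Stop when the task is complete...",
    # ...up to ~10 variants of the same tool-use prompt
]

TEST_CASES = [
    # (task sent to the model, predicate that checks its output)
    ("Rename function parse_cfg to parse_config in parser.py",
     lambda out: "parse_config" in out),
    ("Remove the unused import from utils.py",
     lambda out: "import" in out),
]

def calibrate(run_model):
    """Return the candidate prompt that passes the most test cases."""
    def score(prompt):
        return sum(bool(check(run_model(prompt, task))) for task, check in TEST_CASES)
    return max(CANDIDATE_PROMPTS, key=score)
```

You'd run `calibrate()` once per model (or whenever the prompts change) and cache the winner, just like a one-time calibration pass.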