Seems like that could be a job local LLMs do fairly well soon; it doesn't take a ton of reasoning, just a basic ability to understand functions and write fairly boilerplate code, but it does involve a ton of tokens, especially if you have lots of verbose output from a test run. So doing it locally could end up being a huge cost savings as well.
Maybe, but you still need to pass some context to the subagent to describe the system under test.
This sounds like a good approach, I need to try it. I had good results using context7 in a specialized docs agent. I wasn't able to figure out how to limit an MCP server to a subagent; likely it's not supported.
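FWIW, here's roughly what I'd try for scoping it (untested, just my understanding of Claude Code's subagent format, where agents are markdown files with YAML frontmatter and the `tools` field whitelists what the subagent can call; the file name, description wording, and the exact context7 tool names below are assumptions on my part):

```markdown
<!-- .claude/agents/docs-agent.md — hypothetical example -->
---
name: docs-agent
description: Looks up library documentation via context7 and returns a concise summary.
# Listing tools explicitly should restrict this subagent to just these.
# MCP tools are typically exposed as mcp__<server>__<tool>.
tools: Read, Grep, mcp__context7__resolve-library-id, mcp__context7__get-library-docs
---

You are a documentation lookup agent. Use context7 to resolve the library ID,
fetch the relevant docs, and report back a short summary with references.
```

If that works, the main agent never sees the context7 tools at all, which would keep its context cleaner too.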