Seems like that could be a job local LLMs could do fairly well soon: not a ton of reasoning required, just a basic ability to understand functions and write fairly boilerplate code. But it involves a ton of tokens, especially if you have lots of verbose output from a test run, so doing it locally could end up being a huge cost savings as well.
Maybe, but you still need to pass some context to the sub-agent to describe the system under test.
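The context-passing part is pretty thin, though. Here's a minimal sketch of what that sub-agent call could look like, assuming a local model served behind an OpenAI-compatible endpoint (e.g. Ollama at its default port); the model name and prompt wording are illustrative, not from any real setup:

```python
from openai import OpenAI

# Assumes a local model behind an OpenAI-compatible endpoint
# (e.g. Ollama at localhost:11434); api_key is ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def write_test(source_code: str, test_output: str) -> str:
    """Hand the sub-agent the system-under-test context (function
    source) plus the verbose test-run output, get back a draft test."""
    prompt = (
        "Here is the function under test:\n"
        f"{source_code}\n\n"
        "Here is the verbose output from the last test run:\n"
        f"{test_output}\n\n"
        "Write a pytest test that reproduces this behavior."
    )
    resp = client.chat.completions.create(
        model="qwen2.5-coder:7b",  # hypothetical local model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The expensive part is the token volume in `test_output`, which is exactly what you'd want to keep off a metered API.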