Both humans and coding agents have their strengths and weaknesses, but I've come to appreciate help from coding agents, especially with languages or frameworks where I have less expertise and the agent has more "knowledge", either in its weights or in its ability to ingest documentation more quickly.
One weakness of coding agents is that sometimes all the agent sees is the code, not the output of running it. That's why I've been working on agent instructions/tools/MCP servers that give it the same access that I have. For example, this is a custom chat mode for GitHub Copilot in VS Code: https://raw.githubusercontent.com/Azure-Samples/azure-search...
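To give a sense of the shape of that file, here's a minimal sketch of a custom chat mode (a .chatmode.md file with YAML frontmatter). The description, tool names, and body text are placeholders I'm assuming for illustration, not the contents of the actual file linked above:

    ---
    description: 'Dev agent that can run code, tests, the local server, and Playwright'
    tools: ['codebase', 'runCommands', 'runTests', 'playwright']
    ---
    Run the app with the project's start script and read its logs.
    Run the test suite after every change and read any failures.
    Use the Playwright tools to exercise the app in a real browser.

The frontmatter controls which tools the agent is allowed to call in that mode, and the body is the standing instructions it gets every time the mode is active.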
I give it access to run code and tests and see their output, run the local server and see its output, and use the Playwright MCP tools against that local server. That gives the agent almost every ability that I have - the only tool it lacks is the breakpoint debugger, since that isn't yet exposed to Copilot. I'm hoping it will be in the future, as it would be very interesting to see how an agent steps through code and inspects variables.
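The Playwright part is just an MCP server entry in the workspace. Here's a sketch of a .vscode/mcp.json, assuming the usual npx invocation of the @playwright/mcp package:

    {
      "servers": {
        // Launch the Playwright MCP server on demand
        "playwright": {
          "command": "npx",
          "args": ["@playwright/mcp@latest"]
        }
      }
    }

With that in place, the browser tools show up in the agent's tool picker next to the built-in ones, so the chat mode above can list them.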
I've had a lot more success when I actively customize the agent's environment like this; it makes collaborating with it much easier.