There is no free lunch. The amount of prompt writing needed to give the LLM enough context about your codebase, etc., is comparable to the effort of writing the tests yourself.

Code assistance tools might speed up your workflow by 50% or even 100%, but that's not the geometric scaling commonly touted as the benefit of autonomous agentic AI.

And this is not a model capability issue that goes away with newer generations; it's a human input problem.

I don't know if this is true.

For example, you can spend a few hours writing a really good set of initial tests that covers 10% of your codebase, and another few hours on an AGENTS.md that gives the LLM enough context about the rest. After that, there's a free* lunch, because the agent can write all the other tests for you from that initial set and context.
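For concreteness, a seed test in that initial set might look something like the pytest sketch below. Everything in it (the user-creation feature, the names, the inline stub) is hypothetical; the point is just the conventions the agent is expected to copy for the remaining 90%.

```python
# Hypothetical seed test (pytest). The feature under test is stubbed inline so
# the sketch is self-contained; in a real codebase it would import the app code.
import pytest


class DuplicateEmailError(Exception):
    """Stub error type, standing in for the real domain exception."""


_USERS: dict[str, str] = {}


def create_user(email: str, name: str) -> dict:
    """Stub service function, just enough to give the tests something to call."""
    if email in _USERS:
        raise DuplicateEmailError(email)
    _USERS[email] = name
    return {"email": email, "name": name}


def test_create_user_returns_persisted_record():
    # Arrange-act-assert layout, one behaviour per test, descriptive names:
    # these are the conventions the agent is expected to mirror elsewhere.
    user = create_user(email="ada@example.com", name="Ada")
    assert user["email"] == "ada@example.com"


def test_create_user_rejects_duplicate_email():
    create_user(email="grace@example.com", name="Grace")
    with pytest.raises(DuplicateEmailError):
        create_user(email="grace@example.com", name="Grace")
```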

This also works with "here's how I created the Slack API integration, please create the Teams integration now," because the model has enough to learn from, so that's free* too. This kind of pattern recognition means the prompting effort is O(1) while the model can do O(n) work from it (I know, terrible analogy).

*Also literally becomes free as the cost of tokens approaches zero
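To make the Slack/Teams point concrete, here's a hedged sketch of the kind of existing integration the agent would pattern-match against. The class names, the bare urllib webhook call, and the assumption that a Teams counterpart takes the same shape are all illustrative, not anyone's actual integration code.

```python
# Hypothetical "existing integration" the agent learns from, plus the kind of
# counterpart it would be asked to write. Only the standard library is used so
# the sketch stands alone; real integrations would differ in payload details.
import json
import urllib.request


class SlackNotifier:
    """Posts a plain-text message to a Slack incoming webhook."""

    def __init__(self, webhook_url: str) -> None:
        self.webhook_url = webhook_url

    def send(self, text: str) -> int:
        payload = json.dumps({"text": text}).encode("utf-8")
        request = urllib.request.Request(
            self.webhook_url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return response.status


class TeamsNotifier:
    """The piece the prompt asks for: same interface, different endpoint, and
    (in reality) Teams' own payload conventions; sketched here, not verified."""

    def __init__(self, webhook_url: str) -> None:
        self.webhook_url = webhook_url

    def send(self, text: str) -> int:
        # Assumed payload shape for the sketch; a real Teams webhook message
        # would likely use Teams' card format rather than Slack's {"text": ...}.
        payload = json.dumps({"text": text}).encode("utf-8")
        request = urllib.request.Request(
            self.webhook_url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return response.status
```

The duplication is the point: the interface, the error-free call shape, and the naming come for free from the first class, which is exactly what the one-line prompt buys you.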

A neat part of this is that it mimics how people get onboarded onto codebases. People usually aren't figuring out how to write tests from scratch; they look at the current best practices for similar functionality in the codebase and start there. And as they keep working in that codebase, they start to influence new best practices.

It depends on the problem domain.

I recently had a bunch of Claude credits, so I got it to write a language implementation for me. It probably took 4 hours of my time, but judging by other implementations online, I'd say the average implementation time is hundreds of hours.

The fact that the model already knew the language, and that there were existing tests I could use, made a radical difference.