I don't know if this is true.
For example, you can spend a few hours writing a really good set of initial tests that cover 10% of your codebase, and another few hours writing an AGENTS.md that gives the LLM enough context about the rest of the codebase. After that, there's a free* lunch: the agent can write all the other tests for you using that initial set and the context.
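As a concrete sketch of what one of those "seed" tests might look like (the `slugify` helper and the test names here are hypothetical, not from the original post), the point is that the file itself encodes the house style an agent can pattern-match from:

```python
# test_slugify.py -- a hypothetical seed test that establishes conventions:
# descriptive test names, one behavior per test, edge cases spelled out.
import re


def slugify(title: str) -> str:
    """Hypothetical helper under test: lowercase, hyphen-separated slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_leading_and_trailing_punctuation():
    assert slugify("  Hello, World!  ") == "hello-world"
```

Given a handful of files like this plus the AGENTS.md, the agent has a template for naming, structure, and edge-case coverage when it generates tests for the other 90%.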
This also works with "here's how I created the Slack API integration, please create the Teams integration now", because the model has enough to learn from, so that's free* too. This kind of pattern recognition means the prompting effort is O(1) while the model's output scales O(n) from it (I know, terrible analogy).
*Also literally becomes free as the cost of tokens approaches zero
A neat part of this is that it mimics how people get onboarded onto codebases. People usually aren't figuring out how to write tests from scratch; they look at the current best practices for similar functionality in the codebase and start there. Then, as they keep working in it, they start influencing new best practices themselves.