In my experience, the agent regularly breaks existing features while adding a new one, much more often than a human would. Agents too often forget about the last feature when adding the next, and so break things. I therefore find agent-generated tests important: they stop the agent from making a lot of future mistakes.
It is definitely not foolproof, but IMHO it is, to some extent, easier to describe what you expect to see than to implement it, so I don't find it unreasonable to think it might provide some advantage in terms of correctness.
That definitely depends upon the situation. More often than not, properly testing a component takes me more time than writing it.
In my experience, this tends to be more related to instrumentation / architecture than a lack of ability to describe correct results. TDD is often suggested as a solution.
Given the issues at AWS with Kiro and at GitHub, we already have a few high-profile examples of what happens when AI is used at scale, even when you let it generate tests, which is something you should absolutely not do.
Otherwise, in some cases, you get issues like this one [0].
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
Don't "let it" generate tests. Be intentional. Define them in a way that's slightly oblique to how the production code approaches the problem, so the seams don't match. Heck, that's why it's good to write them before even thinking about the prod side.
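A minimal sketch of what an "oblique" test can look like, with entirely hypothetical names: the stand-in production code finds the k-th smallest element via a heap, while the test checks it against a sort-based oracle, so the test takes a different route to the same answer and its seams don't mirror the implementation's.

```python
import heapq
import random

def kth_smallest(xs, k):
    # Hypothetical "production" implementation: heap-based selection.
    return heapq.nsmallest(k + 1, xs)[-1]

def test_kth_smallest_oblique():
    # Oracle uses sorting, a deliberately different approach, so a bug
    # in the heap-based path cannot be mirrored in the check itself.
    rng = random.Random(0)
    for _ in range(200):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(1, 30))]
        k = rng.randrange(len(xs))
        assert kth_smallest(xs, k) == sorted(xs)[k]

test_kth_smallest_oblique()
```

The same idea scales up: test a parser by round-tripping through a serializer, or a cache by comparing against an uncached path, rather than restating the implementation's own logic.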
The linked article does not speak of tests; it speaks of a team that failed to properly review an LLM refactor and then proceeded to blame the tooling.
LLMs are good at writing tests in my experience.