False. To maintain high quality I often rejected the first result and regenerated the code with a more precise prompt. I also regularly used "refactor prompts" to ask Kiro to change the code to match my high expectations.

Just because you use AI does not mean that you need to be careless about quality, nor is AI an excuse to turn off your brain and just hit accept on the first result.

There is still a skill and craft to coding with AI, it's just that you will find yourself discarding, regenerating, and rebuilding things much faster than you did before.

In this project I deliberately avoided manual typing as much as possible, and instead found ways to prompt Kiro to get the results I wanted. That's why 95% of it was written by Kiro rather than by hand. In the process, I got better and faster at prompting, and reached a much higher success rate at approving the initial pass. Early on I often regenerated a segment of code with more precise instructions three or four times, but that was also early in Kiro's development, with a dumber model, and when I had less prompting skill.

> precise prompt

If there were such a thing, you would just check your prompts into your repo and CI would build your final application from prompts and deploy it.

So it follows that if you are accepting 95% of the output being given to you, you are either doing something really mundane and straightforward, or you don't care much about the shape of the output (not to be confused with quality).

Like in this case, you were also the Product Owner who had the final say about what's acceptable.

The above says "more precise," not "completely precise." The overall point they're making is that you are still responsible for the code you commit.

If they are saying the code in this project was in line with what they would have written, I lean towards trusting their assessment.