Actually yes! I saw this post some months ago and thought to myself: "Wow, this is really close to what we've been building." Kiro uses three files, though: requirements, design, and tasks. The requirements doc is a set of statements that capture all the edge cases you might not have originally thought of. Design looks at what is currently in the code, how the implementation differs from the requirements, and what technical changes need to happen to resolve the difference. Tasks then breaks the very large end-to-end development flow into smaller pieces that an LLM can realistically tackle. The agent keeps track of its work in the tasks file.
Realistically, I don't think Harper's claim of "I get to play cookie clicker" is achievable, at least not for nontrivial tasks. Current LLMs still need a skilled human SDE in the loop. But Kiro does help that loop run a lot smoother, and on much larger tasks than a traditional AI agent can tackle.
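To make the "agent tracks its work in the tasks file" idea concrete, here's a minimal sketch of how such a file might be parsed. The checklist format and task names are invented for illustration, not Kiro's actual file format:

```python
import re

# Hypothetical tasks.md content: checked boxes mark tasks the agent
# has already completed. The tasks themselves are made up.
TASKS_MD = """\
# Tasks
- [x] 1. Add input validation for the upload endpoint
- [x] 2. Persist upload metadata to the database
- [ ] 3. Emit an audit event on successful upload
- [ ] 4. Add retry logic for transient storage failures
"""

def task_progress(markdown: str) -> tuple[int, int]:
    """Return (completed, total) counts from markdown checklist items."""
    boxes = re.findall(r"^- \[([ x])\]", markdown, flags=re.MULTILINE)
    return sum(1 for b in boxes if b == "x"), len(boxes)

done, total = task_progress(TASKS_MD)
print(f"{done}/{total} tasks complete")  # 2/4 tasks complete
```

The point is just that a plain-text checklist is trivially machine-readable, so both the agent and the human reviewer can see exactly how far the larger flow has progressed.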
Thank you, I will certainly check this out. It's something I've been sort of doing manually, but I'm still struggling to find the right workflow.
This recent OpenAI presentation might resonate too then:
Prompt Engineering is dead (everything is a spec)
In an era where AI transforms software development, the most valuable skill isn't writing code - it's communicating intent with precision. This talk reveals how specifications, not prompts or code, are becoming the fundamental unit of programming, and why spec-writing is the new superpower.
Drawing from production experience, we demonstrate how rigorous, versioned specifications serve as the source of truth that compiles to documentation, evaluations, model behaviors, and maybe even code.
Just as the US Constitution acts as a versioned spec with judicial review as its grader, AI systems need executable specifications that align both human teams and machine intelligence. We'll look at OpenAI's Model Spec as a real-world example.
https://youtu.be/8rABwKRsec4?si=waiZj9CnqsX9TXrM
That's a compelling three-file format.
Have you considered a fourth file for Implemented, such that Spec = Implemented + Design?
It would serve both as a check that nothing is missing from Design and as an index for where to find things in the code, and for which architecture / patterns already exist that should be reused where possible.
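The Spec = Implemented + Design check could be a simple set-difference over requirement IDs. A minimal sketch, with invented IDs standing in for whatever identifiers the real files would use:

```python
# Requirement IDs extracted from each file; these IDs are invented examples,
# not Kiro's actual format.
spec = {"REQ-1.1", "REQ-1.2", "REQ-2.1", "REQ-2.2"}
implemented = {"REQ-1.1", "REQ-1.2"}   # already built, tracked in the Implemented file
design = {"REQ-2.1"}                   # planned changes, tracked in Design

# Anything in the spec that is neither implemented nor designed has fallen
# through the cracks and should be flagged.
missing = spec - (implemented | design)
print(sorted(missing))  # ['REQ-2.2']
```

Running a check like this in CI would catch requirements that silently dropped out of Design before any code gets written.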
And what about coding standards / style guide? Where does that go?
That is interesting. So far we just use the task list to track which tasks have been implemented. In the long run I expect an even more rigorous mapping between the actual requirements and the specific lines of code that implement them. So there might be a fourth file one day!
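One plausible shape for that requirements-to-code mapping is annotating source lines with requirement IDs and extracting a traceability index. A hedged sketch, with both the tag format and the code entirely invented:

```python
import re
from collections import defaultdict

# Hypothetical source file annotated with requirement IDs in comments;
# the REQ tags, functions, and names are invented for illustration.
SOURCE = """\
def validate_upload(f):  # REQ-1.1
    if f.size > MAX_SIZE:  # REQ-1.2
        raise ValueError("too large")

def store_metadata(f):  # REQ-2.1
    db.insert(f.meta)
"""

def trace_requirements(source: str) -> dict[str, list[int]]:
    """Map each REQ tag to the line numbers that implement it."""
    mapping = defaultdict(list)
    for lineno, line in enumerate(source.splitlines(), start=1):
        for req in re.findall(r"REQ-[\d.]+", line):
            mapping[req].append(lineno)
    return dict(mapping)

print(trace_requirements(SOURCE))
# {'REQ-1.1': [1], 'REQ-1.2': [2], 'REQ-2.1': [5]}
```

A generated index like this would be exactly the kind of artifact a fourth file could hold: derived from the code, checkable against the requirements doc, and never hand-maintained.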
Coding standards / style guide are both part of the "steering" files: https://kiro.dev/docs/steering/index