Sorry, I'm going to be critical:
"We follow a strict 5-phase discipline" - So we're doing waterfall again? Does this seem appealing to anyone? The problem is you always get the requirements and spec wrong, and then AI slavishly delivers something that meets spec but doesn't meet the need.
What happens when you get to the end of your process and you are unhappy with the result? Do you throw it out and rewrite the requirements and start from scratch? Do you try to edit the requirements spec and implementation in a coordinated way? Do you throw out the spec and just vibe code? Do you just accept the bad output and try to build a new fix with a new set of requirements on top of it?
(Also the llm authored readme is hard to read for me. Everything is a bullet point or emoji and it is not structured in a way that makes it clear what it is. I didn't even know what a PRD meant until halfway through)
> So we're doing waterfall again?
I think the big difference between this and waterfall is that waterfall talked about the execution phase before the testing phase, and we have moved past defining the entire system as a completed project before breaking ground. Nothing in defining a feature in documentation up front stops continuous learning and adaptation.
However, coding with LLMs breaks the "Working software over comprehensive documentation" component of agile. It breaks because documentation now matters in a way it didn't when working with small teams.
It also breaks because writing comprehensive documentation is now cheaper in time than it was three years ago. The big problem now is maintaining that documentation. Nobody is doing a good job of that yet - at least not that I've seen.
(Note: I think I have an idea here if there are others interested in tackling this problem.)
> So we're doing waterfall again?
The waterfall we know was always a mistake. The downhill-only flow we know and (don't) love came from someone at the DoD who only glanced at the second diagram (Figure 2) in the original 1970 Royce paper and said "This makes sense, we'll do it!" and... we're doing waterfall.
So, go to the paper that started it all, but was arguing against it:
- https://www.praxisframework.org/files/royce1970.pdf
I encourage you to look at the final diagram in the paper and see some still controversial yet familiar good ideas:
Crucially, these arrows go backwards. See also the "Spiral Model" that attempts to illustrate this a different way: https://en.wikipedia.org/wiki/Spiral_model#/media/File:Spira...
Amazing that waterfall arguably spread from this paper, where it's actually an example of "what not to do."
Here's what Royce actually says about the waterfall diagram:
> The implementation described above is risky and invites failure. … The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. … Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. … The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. … One can expect up to a 100-percent overrun in schedule and/or costs.
This is 55 years ago.
That "Spiral Model" sure looks like an OODA loop.
Waterfall is what works for most consulting businesses. Clients like the buzz of agile but they won't budge on scope, budget or timeframe. You end up being forced to do waterfall.
Yep. And you often end up doing waterfall with a veneer of agile that ends up being worse than either one.
This has been my experience too. It's horrible because everyone does all the agile meetings and "planning", but it's just used as progress reporting to the product managers... if that's all 'agile' is being used for, just do daily reporting and be done with it.
Waterfall might be what you need when dealing with external human clients, but why would you voluntarily impose it on yourself in miniature?
Because agile is a project management process, not an engineering practice. The value of sprints is in delivering product at the end of every sprint. If that's not happening because the client isn't interested, and you're not getting product feedback from your customer (the only person whose feedback actually matters) and using that feedback to determine the tasks that go into the next sprint (including potentially cancelling tasks for work the customer is no longer interested in), then you're actually slowing the project down by forcing people to work on fit and finish every sprint before they need to (i.e. at project completion).
That's not to say you shouldn't have good engineering practices anyway, like short-lived branches and continuous integration. But you should be merging in branches on a schedule that is independent of sprints (and hopefully faster than the sprint length).
OP here. I wouldn't necessarily call it waterfall, but it's definitely systemized. The main idea was to remove the vibe from vibe coding and use the AI as a tool rather than as the developer itself. By starting off knowing exactly what we want to develop at a high(ish) level (= the PRD), we can then create an implementation plan (epic) and break it down into action items (tasks/issues).
One of the benefits of using AI is that these processes, which I personally never followed in the pre-AI era, are now easy and frictionless to implement.
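To make that hierarchy concrete, here is a minimal sketch of how a PRD, an epic, and its tasks might nest. The data model and the example feature are purely illustrative, not any specific tooling:

```python
# Illustrative sketch of the PRD -> epic -> task breakdown described above.
from dataclasses import dataclass, field


@dataclass
class Task:
    """A single concrete action item (issue)."""
    title: str
    done: bool = False


@dataclass
class Epic:
    """An implementation plan for one slice of the PRD."""
    title: str
    tasks: list[Task] = field(default_factory=list)


@dataclass
class PRD:
    """High(ish)-level statement of what we want to build."""
    goal: str
    epics: list[Epic] = field(default_factory=list)


# Example breakdown for a made-up feature.
prd = PRD(
    goal="Let users export reports as CSV",
    epics=[
        Epic(
            title="Add an export pipeline",
            tasks=[
                Task("Design the CSV schema"),
                Task("Add an /export endpoint"),
                Task("Wire up the UI button"),
            ],
        )
    ],
)
```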
I think for me personally, such a linear breakdown of the design process doesn't work. I might write down "I want to do X, which I think can be accomplished with design Y, which can be broken down into tasks A, B, and C", but after implementing A I realize I actually want X', or need to evolve the design to Y', or that a better next task is actually D, which I didn't think of before.