If you’ve heard it a number of times and refuse to consider what people are saying then maybe I can’t help you.
I’m talking from personal experience of well over twenty years as both a developer and, for a while, a manager.
The slow part isn’t writing code.
It’s shipping it. You can have everyone vibe coding until their eyes bleed and you’ve drained their will to live. The slowest part will still be testing, verifying, releasing, and maintaining the ball of technical debt that’s been accumulating. You will still have to figure out what to ship, what to fix, what to rush out and what to hold back until it’s right, etc. The more people you have, the slower that goes in my experience. AI tools don’t make that part faster.
> If you’ve heard it a number of times and refuse to consider what people are saying then maybe I can’t help you.
When someone says “I’ve heard this a thousand times, but…”, it could be that the person is just stupidly obstinate, but it could also mean that they have a considered opinion that it might benefit you to learn from.
“More people slow down projects” is an oversimplified version of the premise in The Mythical Man Month. If that simplistic viewpoint held, Google would employ a grand total of maybe a dozen engineers. What The Mythical Man Month says is that more engineers slow down a project that is already behind. i.e. You can’t fix a late project by adding more people.
This does not mean that the amount of code/features/whatever a team can produce or ship is unrelated to the size of the team or the speed at which they can write code. Those are not statements made in the book.
Sure, I’m not writing a whole critical analysis of TMMM here and am using an aphorism to make a point.
Let’s imagine we’re going to make a new operating system to compete with Linux.
If we have a team of 10 developers we’re probably not going to finish that project in a month.
If we’re going to add 100 developers we’re not going to finish that project in a month.
If we add a thousand developers we’re still not going to finish that project in a month.
But which team should ship first? And keep shipping and release fastest?
My bet would be on the smaller team. The exact number of developers might vary but I know that if you go over a certain threshold it will slow down.
People trying to understand management of software projects like to use analogies to factory lines or building construction to understand the systems and processes that produce software.
Yet adding more code per unit of time doesn’t obviously add anything to the process.
Even adding more people to a factory line has diminishing returns in efficiency.
There’s a sweet spot, I find.
As for Google… it’s not a paragon of efficiency from what I hear. Though I don’t work there. I’ve heard stories that it takes a long time to get small changes to production. Maybe someone who does work there could step in and tell us what it’s like.
As a rule though, I find that smaller teams with the right support, can ship faster and deliver higher quality results in the long run.
My sample size isn’t large though. Maybe Windows is like the ultimate operating system that is fast, efficient, and of such high quality because they have so many engineers working on it.
> using an aphorism to make a point.
But your “aphorism” is not true. You made a claim that more developers make a project slower. And you pointed to TMMM in support of that claim.
Now you seem to be saying “I know this isn’t really true, but my point hinges on us pretending it is.”
> Let’s imagine we’re going to make a new operating system to compete with Linux.
This is a nonsensical question. “Would you rather be set up to fail with 10 engineers or 1000?” Your proposed scenario is one where it’s not possible to succeed, so there is no choice to be made on technical merit.
> But which team should ship first? And keep shipping and release fastest?
Assuming you are referring to shipping after that initial month where we have failed, the clear option is the largest of the teams. A team of 10 will never replicate the Linux kernel in their lifetimes. The Linux kernel has something like 5000 active contributors.
> I’ve heard stories that it takes a long time to get small changes to production.
There are many reasons it’s slow to ship changes in a company like Google. This doesn’t change the fact that no one is building Chrome or Android with a team of ten.
You’re right, I’m not making my point well.
You do need enough people to make complex systems. We can do more together than we can on our own. Linux started out with a small team but it is large today.
It runs against my experience though and I can’t seem to explain why.
My observation in my original post is that I don’t see why writing code is the bottleneck. It can be when you have too much of it, but I find all the ancillary things around shipping code take more time and effort.
Thanks for the discussion!
> It runs against my experience though and I can’t seem to explain why.
Your experiences are probably correct, but incomplete. More engineers on a project do come with more cost. Spinning up a new engineer is a net loss for some time (making the late project later) and output per engineer added (even after ramp up) is not linear. 5000 engineers working on Linux do not produce 5000x as much as Torvalds by himself. But they probably do produce more than 2500 engineers.
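That sublinear scaling has a classic hand-wavy model from TMMM itself: n people have n(n-1)/2 pairwise communication channels, and each channel skims a little off total output. A toy sketch in Python (the per-engineer output and per-channel cost constants are my own illustrative assumptions, not numbers from the book):

```python
def effective_output(n, per_engineer=1.0, channel_cost=0.0001):
    """Toy model of team output under communication overhead.

    Each engineer adds per_engineer units of output, but every
    pairwise communication channel costs channel_cost units.
    Channels among n people: n * (n - 1) / 2 (Brooks, TMMM).
    The constants are made up purely for illustration.
    """
    channels = n * (n - 1) / 2
    return max(0.0, n * per_engineer - channels * channel_cost)

# Output still grows with headcount, but each added engineer
# contributes less than the last:
for n in (10, 100, 1000, 2500, 5000):
    print(n, effective_output(n))
```

With these made-up constants, 5000 engineers still out-produce 2500, but each one's marginal contribution keeps shrinking, which is the point above.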
> Thanks for the discussion!
You too
> It’s shipping it. You can have everyone vibe coding until their eyes bleed and you’ve drained their will to live. The slowest part will still be testing, verifying, releasing, and maintaining the ball of technical debt that’s been accumulating. You will still have to figure out what to ship, what to fix, what to rush out and what to hold back until it’s right, etc. The more people you have, the slower that goes in my experience. AI tools don’t make that part faster.
This type of comment is everything that is wrong with our industry. If "shipping it" is an issue there is a colossal failure throughout the entire organization. My team "shipped" 11 times yesterday, 7 on Monday, 21 on Friday... "shipping" is a non-event if you know what the F you are doing. If you don't, you should learn. If adding more people to help you with the amazing shit you are doing makes you slower, you have a lot of work to do up and down your ladder.
Maybe it's just my luck, but most engineering teams I've worked with that were building some kind of network-facing service in the last sixteen-some-odd years have tried to implement continuous delivery of one kind or another. It usually started off well but ended up being just as slow as the versioned-release system they used before.
It sounds like your team is the exception? Many folks I talk to have similar stories.
I've worked with teams to build out a well-oiled continuous delivery system. With code reviews, integration gating, feature flags, a blue-green deployment process, and all of the fancy o11y tools... we shipped several times a day. And people were still afraid to ship a critical feature on a Friday in case there had to be a roll-back... still a pain.
And all of that took way more time and effort than writing the code in the first place. You could get a feature done in an afternoon and it would take days to get through the merge queue, get through reviews, make it through the integration pipeline and see the light of production. All GenAI had done there was increase the input volume to the slowest part of the system.
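For what it's worth, the feature-flag piece of that pipeline is conceptually tiny; the cost is everything around it. A minimal in-process sketch (the flag name and checkout functions are hypothetical examples, and real systems such as LaunchDarkly or Unleash add dynamic config, targeting rules, and audit trails on top of this):

```python
# Minimal in-process feature flag: the core of "ship dark, flip later".
# Flag name and checkout paths are hypothetical illustrations.
FLAGS = {"new_checkout_flow": False}  # new code deploys disabled

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def legacy_checkout(cart):
    return {"path": "legacy", "total": sum(cart)}

def new_checkout(cart):
    return {"path": "new", "total": sum(cart)}

def checkout(cart):
    # The new path rides every deploy to production but stays dark;
    # rolling back is a config flip, not a redeploy.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The point of the pattern is decoupling deploy from release: code ships continuously, but the feature only turns on when someone flips the flag.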
People were still figuring out the best way to use LLM tools at that time though. Maybe there are teams who have figured it out. Or else they just stop caring and don't mind sloppy, slow, bloated software that struggles to keep one nine of availability.