I completely see the benefit of this strategy. Defaulting to something like this would seem to inflate costs, though, as a tradeoff for time. I know certain LLM usage can be pretty pricey. I hope something like this doesn't become the default, though, as I can see parallelization becoming a dark pattern for those making money off of token usage.

I don’t think it’s a great representation of the utility of worktrees, or even of efficient practical use of agents.

It pretty much is though. This is exactly what you'd do if you had 100 different employees.

I wouldn't ask for 100 different versions of the same feature from each of them.

1 agent is supposed to be powerful with proper context and engineering design decisions in mind - whether UI or backend.

Asking 3 different agents to do the same engineering task reeks of inefficient or ineffective development patterns with agents.

> I wouldn't ask for 100 different versions of the same feature from each of them.

You wouldn't because human labor is too expensive to make it worthwhile. You would if it were too cheap to meter.

We actually do that at the scale of society - that's market competition in a nutshell. Lots of people building variants of the same things, then battling it out on the market. Yes, it's wasteful as hell (something too rarely talked about), but we don't have a better practical alternative at this point, so there's some merit to the general idea.

(Also the same principle applies to all life - both in terms of how it evolves, and how parts of living organisms work internally. Actively maintained equilibria abound.)

> Also the same principle applies to all life

Actively maintained equilibria abound, but this is not typically the mechanism. Different species in adjacent niches aren't better or worse versions of the same organism to be evaluated and either selected or discarded. It's more typical for them to adopt a strategy of ecological segmentation so that they can all have their needs met. Every few years moths migrate to my state to reproduce - and they do so before our local moths have woken up for the season, and leave around the time they do, so that they aren't in competition. Birds that feed from the same trees will eat from different parts of the tree and mate at different times, so that their peak energy consumption doesn't line up. What would the benefit be in driving each other to extinction?

Evolution doesn't make value judgments; it doesn't know which species is better or worse, and it doesn't know how future climatic shifts will change the fitness landscape. Segmentation is both easier and a hedge against those future shifts.

Engineering works under a very different logic where the goal is optimal performance in a controlled environment for an acceptable service life, not satisfactory performance with extremely high robustness in the face of unknown changes into the perpetual future. When we rank different systems and select the most optimal, we are designing a system that is extremely brittle on geologic timescales. Abandon a structure and it will quickly fall apart. But we don't care because we're not operating at geologic timescales and we expect to be around to redesign systems as their environment changes to make them unsuitable.

Similarly, the duplication of labor/capacity in markets you described could be viewed as trading efficiency for robustness rather than as waste. E.g., heavily optimized supply chains are great for costs, but can have trouble adapting to global pandemics, wars in inconvenient places, or ships getting stuck in the wrong canal.

I actually don’t use them that way. I use 100 different agents on 100 different worktrees to develop 100 different apps for the overall project.
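
For concreteness, a rough sketch of that setup - one branch and one worktree per task, with an agent launched in each. The `my-agent` CLI and the task list are placeholders, not anything from the thread:

```python
# Minimal sketch: one git worktree + branch per task, one agent per worktree.
# Assumes you're inside a git repo; "my-agent" is a hypothetical agent CLI.
import subprocess
from pathlib import Path

REPO = Path(".").resolve()
TASKS = {
    "billing-ui": "Build the billing settings page",
    "export-api": "Add a CSV export endpoint",
    # ... one entry per app/feature
}

procs = []
for name, prompt in TASKS.items():
    worktree = REPO.parent / f"{REPO.name}-{name}"
    # A dedicated branch and worktree keeps each agent fully isolated.
    subprocess.run(
        ["git", "worktree", "add", "-b", f"agent/{name}", str(worktree)],
        check=True,
    )
    # Launch the (hypothetical) agent inside its own worktree.
    procs.append(subprocess.Popen(["my-agent", "--prompt", prompt], cwd=worktree))

for p in procs:
    p.wait()
```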

That’s what I’m advocating for. That’s not what was demonstrated in the blog post.

In frontend, exploratory random work might have some value if you don't know what you need.

Both seem like valid uses of this synthetic intelligence to me.

What if you have 100 lint errors whose fixes you can parallelize across 100 small local 1B LLMs?

This is exactly what I would do.
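
Something like this, roughly: group the lint errors by file and fan them out to a small local model behind an OpenAI-compatible endpoint. The ruff-based parsing and the endpoint/model names are assumptions, just to make the idea concrete:

```python
# Rough sketch: partition lint errors by file, let a small local model fix each file.
# Assumes ruff-style "path:line:col: message" output and an OpenAI-compatible
# server at LOCAL_LLM_URL (e.g. llama.cpp / vLLM) serving a small model.
import subprocess
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
import requests

LOCAL_LLM_URL = "http://localhost:8000/v1/chat/completions"

def lint_errors_by_file():
    out = subprocess.run(["ruff", "check", "."], capture_output=True, text=True)
    grouped = defaultdict(list)
    for line in out.stdout.splitlines():
        if ":" in line:
            grouped[line.split(":", 1)[0]].append(line)
    return grouped

def fix_file(path, errors):
    source = open(path).read()
    resp = requests.post(LOCAL_LLM_URL, json={
        "model": "local-1b",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": f"Fix these lint errors and return only the full file:\n"
                       f"{chr(10).join(errors)}\n\n{source}",
        }],
    })
    open(path, "w").write(resp.json()["choices"][0]["message"]["content"])

with ThreadPoolExecutor(max_workers=16) as pool:
    for path, errors in lint_errors_by_file().items():
        pool.submit(fix_file, path, errors)
```

Partitioning by file also sidesteps most of the merge-conflict problem, since no two workers touch the same file.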

Without agent collaboration, you'll need a whole tree of agents just to resolve the merge conflicts.

Usually the orchestrator or planner that spawns the sub-agents *is* the collaboration protocol, since it has visibility into all the others and can start/kill new ones at will as it sees fit and coordinate appropriately. But yeah.
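
As a toy sketch of that pattern (the `sub-agent` command and the timeout are made up):

```python
# Toy orchestrator: spawn sub-agents, watch them, kill and respawn stragglers.
import subprocess, time

TIMEOUT = 600  # seconds before a sub-agent is considered stuck
tasks = ["fix-lints", "write-tests", "update-docs"]

running = {}
for task in tasks:
    running[task] = (time.time(), subprocess.Popen(["sub-agent", "--task", task]))

while running:
    for task, (started, proc) in list(running.items()):
        if proc.poll() is not None:            # finished: collect the result
            print(task, "exited with", proc.returncode)
            del running[task]
        elif time.time() - started > TIMEOUT:  # stuck: kill and respawn
            proc.kill()
            running[task] = (time.time(),
                             subprocess.Popen(["sub-agent", "--task", task]))
    time.sleep(5)
```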

This is discussed in TFA. The absolute costs are negligible, particularly in comparison to the time saved.

I think the most annoying part is when a coding agent takes a particularly long time to produce something AND the output is bad. It's such a time sink / sunk cost.