The author is a little confused. A system that blocks releases on defects and doesn't pin versions is continuous integration, not a monorepo. The two are not synonymous. Monorepos often use continuous integration to ensure their integrity, but you can use continuous integration without a monorepo, and monorepos can be used without continuous integration.

> But the migration had a steep cost: over 6 years later, there are thousands of projects still stuck on an older version.

This is a feature, not a bug. Pinning versions allows systems to independently maintain their own dependency trees. This is how your Linux distribution actually remains stable (or used to, before the onslaught of "rolling release" distributions and the infection of product development culture by the "automatically updating application", which constantly leaves me with non-functional mobile applications that I am forced to update once a week). You set the versions, nothing changes, so you can keep using the same software, and it doesn't break. Until you choose to upgrade it and deal with all the breaking shit.

Every decision in life is a tradeoff. Do you go with no version numbers at all, always updating, always fixing things? Or do you always require version numbers, keeping things stable, but having difficulty updating because of a lack of compatible versions? Or do you find some middle ground? There are pros and cons to all these decisions. There is no one best way, only different ways.

For me the comparison to a monorepo made a lot of sense. One of the main features of a monorepo is maintaining a DAG of dependencies and using it to decide which tests to run given a code change. CRAN package publishing seems to follow the same idea.

> One of the main features of monorepo is maintaining a DAG of dependencies

No, that's the opposite of a monorepo (w/continuous integration). A monorepo w/continuous integration does not maintain any list of dependencies or relationships, by design. Every single commit is one global "version" which represents everything inside the repo. Everything in the repo at that commit is only guaranteed to work with everything else in the repo at that commit. You use continuous integration (w/quality gates) to ensure this, by not allowing merges that could possibly break anything if merged.
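As a rough sketch (none of this is from the thread; git and `pytest` as the test runner are my assumptions), such a quality gate boils down to: stage the candidate merge, run everything, and only let it land if it's green:

```python
# Hedged sketch of a monorepo merge gate, assuming git and pytest;
# a real CI system would do this on a clean runner, not a dev checkout.
import subprocess

def merge_is_safe(candidate_branch: str) -> bool:
    """Stage the would-be merge, then run the whole repo's tests against it."""
    subprocess.run(
        ["git", "merge", "--no-ff", "--no-commit", candidate_branch],
        check=True,  # a merge conflict fails the gate loudly
    )
    try:
        # The working tree now contains the merge result: test all of it.
        return subprocess.run(["pytest"]).returncode == 0
    finally:
        # Leave the checkout as we found it either way.
        subprocess.run(["git", "merge", "--abort"])
```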

Maintaining a DAG of dependencies is a version pinning strategy, the opposite of the continuous integration version-less method. It is intended for external dependencies that do not exist in the current repository - which is why it's used for multi-repos, not monorepos.

But as I originally pointed out, you can have a monorepo where everything is version-pinned (not using continuous integration). It's just not the usual example.

A lot of monorepo strategies that I've seen involve maintaining a DAG of dependencies so that you don't need to run CI over the entire system (which is wasteful if most of the code hasn't changed), but only a specific subset.

Each component within the monorepo will declare which other components it depends on. When a change occurs, the CI system figures out which components have changed, and then runs tests/builds/etc. for those components and all their dependents (everything that depends on them, directly or transitively). That way, you don't need to build the world every time; you just rebuild the specific parts that might be affected.
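A minimal sketch of that selection step (the component names and the `deps` mapping are made up for illustration):

```python
# Given each component's declared dependencies, compute which components
# a change touches: the changed ones plus all their transitive dependents.
from collections import defaultdict

# component -> components it depends on (hypothetical example DAG)
deps = {
    "core": set(),
    "api": {"core"},
    "web": {"api"},
    "docs": set(),
}

def affected(changed: set[str]) -> set[str]:
    # Invert the DAG: component -> components that depend on it.
    rdeps = defaultdict(set)
    for comp, ds in deps.items():
        for d in ds:
            rdeps[d].add(comp)
    # Walk outward from the changed set along reverse edges.
    result, stack = set(changed), list(changed)
    while stack:
        for dependent in rdeps[stack.pop()]:
            if dependent not in result:
                result.add(dependent)
                stack.append(dependent)
    return result

print(affected({"core"}))  # {'core', 'api', 'web'} -- "docs" is untouched
```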

I think that specific concept (maintaining a single "world" repository but only rebuilding the parts that have changed in each iteration) is what the author is talking about here. It doesn't have to be done via a monorepo, but it's a very common feature in larger monorepos and I found the analogy helpful here.

That's a cool thing to have, and I'm glad you found the analogy helpful, but I hope you understand that the CI DAG you're talking about doesn't make anything more stable; it just caches build jobs. To make things more stable (what the post is referring to) you need a separate mechanism; in a monorepo w/CI, that's gating the merge on test results (which doesn't require a DAG). (And actually, if you skip tests in a monorepo, you are likely to eventually miss systemic bugs.)

That's something you can do just as well with multiple repos, though.

What a monorepo gives you on top of that is that you can change the dependents in the same PR.

For me too - in a way, a "virtual" monorepo - as if all these packages belong in some ideal monorepo, even though they don't.

The problem with pinning dependencies is clashing transitive dependencies across a bunch of dependencies. For me this happens in Python every third time I try to run something new, even though version numbers are pinned (things can still fail on your system, or you may want to include dependencies with incompatible transitive dependencies). It has never happened to me with R, and now I know why.
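A minimal illustration of such a clash, using the `packaging` library (the package names `a`, `b`, and `c` are hypothetical):

```python
# Two pinned direct dependencies can still demand incompatible versions
# of a shared transitive dependency "c"; no resolver can fix that.
from packaging.specifiers import SpecifierSet

a_needs_c = SpecifierSet("<2.0")   # say a==1.4 requires c<2.0
b_needs_c = SpecifierSet(">=2.0")  # say b==3.1 requires c>=2.0

combined = a_needs_c & b_needs_c
candidates = ["1.9", "2.0", "2.5"]
print([v for v in candidates if v in combined])  # [] -- nothing satisfies both
```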

The actual tradeoff is end-user experience and ease vs. package developer experience and ease. It is not about updating R or a package; it is about somebody being able to create or run a project without getting into a clash of dependencies for reasons that can hardly be controlled by either them or the package developer.

Stability vs. security: pinning gives you the former at the cost of the latter, and that is why rolling releases are more popular these days. No?

Rolling releases are popular because people got sick of waiting two years to upgrade their distro to get the new version of some Linux app, because a given release of a distro keeps the same old versions of its apps forever (in the stable tree). The unstable and testing branches have newer releases, but, as the names imply, they break quite a bit.

So rolling releases are like an unstable/testing branch with more effort put into keeping it from breaking, and you get new software all the time. The downside is that you also don't get to opt out of an upgrade, which can be pretty painful when the upgrade breaks something you're used to.

> There is no one best way

I think that the laws of physics dictate that there is. If your developers span the galaxy, the speed of development is slower with continuous integration than with pinned deps, because every merge has to synchronize against a single global state across light-speed latency.

We don't know the laws of physics, though. We just have models that both fit into a human brain and are surprisingly good for whatever humans have had an opportunity to experiment against. That is really awesome, but it doesn't mean we know.

It’s partial knowledge.

Saying “We don’t know.” feels more wrong to me than “We know.” (emphasis on the periods).

They’re not confused. It’s an analogy.

I don't see that. What I do see is an interesting thought experiment in the title and then zero delivery in the body text.

The term clickbait comes to mind.