> In a true monorepo ...

Ideally, yes. However, such a monorepo can become increasingly complex as the software being maintained grows larger (and/or as more people work on it).

You end up with massive changes, which can eventually become something that no single person can realistically hold in their head. Not to mention clashes: you will have people making contradictory/conflicting changes, and there will have to be some resolution mechanism outside (or the "default" one, which is first come first served).

Of course, you could "manage" this complexity by defining API boundaries/layers and treating those API surfaces as important enough not to change too often. But that simply means you're a monorepo in name only, not too different from having separate repos with versioned artefacts and a defined API boundary.

> Of course, you could "manage" this complexity by defining API boundaries/layers and treating those API surfaces as important enough not to change too often. But that simply means you're a monorepo in name only, not too different from having separate repos with versioned artefacts and a defined API boundary.

You have visibility into who is using what, and you still get to do an atomic update commit even when a change touches multiple boundaries. I would say that's a big difference. I hated working with shared repos in big companies.

They don’t have to be massive changes. You can release the feature with backwards compatibility and then gradually update dependencies and remove the old interface.

I think the way to go is to do such big backwards-incompatible refactors gradually. E.g. you want to make all the callers specify some additional parameter. So first you create a version of your API which populates this parameter with some reasonable default. Then the old API is marked deprecated and just calls the new API with that default value, and then you inline the old API at every call site. After a while it's possible to remove the old API.
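To make the shape of that migration concrete, here's a minimal sketch in TypeScript. All the names (`fetchReport`, `fetchReportLegacy`, the timeout parameter) are hypothetical, invented just to illustrate the steps:

```typescript
// Step 1: introduce the new API that takes the extra parameter explicitly.
function fetchReport(userId: string, timeoutMs: number): string {
  return `report for ${userId} (timeout ${timeoutMs}ms)`;
}

// Step 2: turn the old API into a deprecated shim that fills in a sane default.
/** @deprecated Use fetchReport(userId, timeoutMs) instead. */
function fetchReportLegacy(userId: string): string {
  return fetchReport(userId, 5000); // 5000ms is the assumed default
}

// Step 3: migrate callers, inlining the default at each call site:
//   fetchReportLegacy(id)  -->  fetchReport(id, 5000)
// Step 4: once no callers remain, delete fetchReportLegacy.
```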

That said, you of course need some tooling to reliably discover all the callers and perform those migrations at scale.
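For illustration, even a naive text scan can surface candidate call sites; a real setup would use semantic indexing (the compiler API, a code-search service, etc.). The directory and symbol names below are made up:

```typescript
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

// Recursively collect .ts files under `dir` that mention `symbol`.
// A crude approximation of "who calls this?" for a monorepo sweep.
function findCallers(dir: string, symbol: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      hits.push(...findCallers(path, symbol));
    } else if (path.endsWith(".ts") && readFileSync(path, "utf8").includes(symbol)) {
      hits.push(path);
    }
  }
  return hits;
}

console.log(findCallers("src", "fetchReportLegacy"));
```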

That's easier to do if all the code is owned by one org, but harder if you can’t reliably tell who’s using your APIs.

However, having centralized migrations really saves a lot of work for the org.