> Most projects I've worked on tend to span multiple services though, so I think AI would struggle more trying to understand and coordinate across all those services versus having all the logic in a single deployable instance.
1. At least CC supports multiple folders in a workspace, so that’s not really a limitation.
2. If you find you are making changes across multiple services, then that is a good indication that you might not have the correct abstraction on the service boundary. I agree that in this case a monolith seems like a better fit.
Agreed on both counts. On the first, though, it's still easier to implement things when bugs surface as compile errors or local unit/integration test failures, rather than as distributed service mismatches that can only be caught with extensive distributed e2e tests and a platform for running them. The lack of distribution also cuts down significantly on the amount of code, edge cases, and deployment sequencing that needs to be taken into account.
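To make that concrete, here's a minimal sketch (hypothetical types and field names) of the same bug in both settings: in a monolith the caller and callee share one type, so renaming a field is caught by the type checker or any unit test; across a service boundary the contract is just bytes on the wire, so the same rename only shows up at runtime in the consumer.

```python
from dataclasses import dataclass
import json

# Monolith style: one shared type. Renaming total_cents here breaks
# handle_order at type-check / unit-test time, before anything deploys.
@dataclass
class Order:
    order_id: str
    total_cents: int

def handle_order(order: Order) -> int:
    return order.total_cents

# Service style: the producer and consumer only agree by convention.
def producer_v2() -> str:
    # hypothetical v2 payload where total_cents was renamed to amount_cents
    return json.dumps({"order_id": "o1", "amount_cents": 1299})

def consumer_v1(payload: str) -> int:
    data = json.loads(payload)
    return data["total_cents"]  # KeyError at runtime, not compile time

assert handle_order(Order("o1", 1299)) == 1299
try:
    consumer_v1(producer_v2())
except KeyError:
    pass  # only an e2e or contract test would catch this before prod
```

Nothing in the service version fails until the two sides actually talk to each other, which is exactly why the distributed case needs that extra e2e infrastructure.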
For the second, yeah, but IME everything starts out well-factored and almost universally evolves into spaghetti over time. The main advantage monoliths have is that they're safer to refactor across boundaries. With distributed services, there are a lot more backward-compatibility guarantees and concerns you have to work through, and it's harder to set up tests that exercise everything e2e across those boundaries. Not impossible, but hard enough that it usually requires a dedicated initiative.
Anyway, random thoughts.