I don't know jack about shit here, but genuinely: why migrate a live production system piecewise? Wouldn't it be far more sane to start building a shadow copy on Azure and let that blow up in isolation while real users keep using the real service on """legacy""" systems that still work?
Because it's significantly harder to isolate problems, and you'll end up in this loop:

* Deploy everything
* It explodes
* Roll back everything
* Spend two weeks finding the problem in one system, then fix it
* Deploy everything
* It explodes
* Roll back everything
* Spend two weeks finding a new problem that was created while you were fixing the last one
* Repeat ad nauseam
Migrating iteratively gives you a foundation to build upon with each component.
So… create your shadow system piecewise? There is no reason to have "explode production" in your workflow, unless you are truly starved for resources.
Does this shadow system have usage?
Does it handle queries, trigger CI actions, run jobs?
If you test it, yes.
Of course, you need some way of producing test loads similar to those found in production. One way would be to take a snapshot of production, tap incoming requests for a few weeks, log everything, and then replay it at "as fast as we can" speed for testing; another would be to mirror production live, running the same operations in the test system as in production.
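For the replay variant, the core loop is tiny. A minimal sketch in Python, assuming requests were captured as JSON lines with the method, path, headers, body, and the status production returned (the field names and shadow URL are invented for illustration):

```python
import json

import requests  # pip install requests

SHADOW_BASE = "https://shadow.example.internal"  # hypothetical shadow environment

def replay(log_path: str) -> None:
    """Replay captured production requests against the shadow system as fast as possible."""
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)  # {"method": ..., "path": ..., "headers": ..., "body": ..., "status": ...}
            resp = requests.request(
                entry["method"],
                SHADOW_BASE + entry["path"],
                headers=entry.get("headers", {}),
                data=entry.get("body"),
                timeout=30,
            )
            # Flag any request where the shadow system disagrees with what production did.
            if resp.status_code != entry["status"]:
                print(f"MISMATCH {entry['method']} {entry['path']}: "
                      f"shadow={resp.status_code} prod={entry['status']}")

if __name__ == "__main__":
    replay("captured_requests.jsonl")
```

The live-mirroring variant can live entirely in the proxy layer instead; nginx's mirror directive, for example, duplicates incoming requests to a second backend and discards the mirrored responses.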
Alternatively, you could take the "chaos monkey" approach (https://www.folklore.org/Monkey_Lives.html), do away with all notions of realism, and just fuzz the heck out of your test system. I'd go with that first, because it's easy and tends to catch the more obvious bugs.
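In that spirit, a throwaway fuzzer doesn't need to be smart, it just needs volume. A rough sketch, with the base URL invented and no attempt at realism:

```python
import random
import string

import requests  # pip install requests

BASE = "https://shadow.example.internal"  # hypothetical shadow environment
METHODS = ["GET", "POST", "PUT", "DELETE"]

def random_path(max_segments: int = 4) -> str:
    """Build a junk URL path out of random alphanumeric segments."""
    segments = [
        "".join(random.choices(string.ascii_letters + string.digits, k=random.randint(1, 12)))
        for _ in range(random.randint(1, max_segments))
    ]
    return "/" + "/".join(segments)

def fuzz(iterations: int = 10_000) -> None:
    for _ in range(iterations):
        method = random.choice(METHODS)
        body = random.randbytes(random.randint(0, 512)) if method in ("POST", "PUT") else None
        try:
            resp = requests.request(method, BASE + random_path(), data=body, timeout=10)
        except requests.RequestException as exc:
            print(f"ERROR {method}: {exc}")  # connection resets and timeouts are findings too
            continue
        if resp.status_code >= 500:
            # 5xx on garbage input usually means an unhandled code path worth a look.
            print(f"5xx {method} {resp.url} -> {resp.status_code}")

if __name__ == "__main__":
    fuzz()
```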
So just double your cloud bill for several weeks, costing a site like GitHub millions of dollars?
How do you handle duplicate requests to external services? Are you going to run credit cards twice? Send emails twice? If not, how do you know it's working with fidelity?
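(The usual workaround is to wire the shadow environment to no-op stubs for anything side-effecting, which is exactly where the fidelity gap opens up: you're no longer exercising the real integration. A rough sketch of the pattern, every name invented:)

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """Interface both the real and the stub gateway satisfy (hypothetical)."""
    def charge(self, customer_id: str, cents: int) -> str: ...

class RealGateway:
    def charge(self, customer_id: str, cents: int) -> str:
        # Production wiring would call the actual payment provider here.
        raise NotImplementedError("only wired up in production")

class StubGateway:
    """Shadow-environment stand-in: records the call, charges nobody."""
    def __init__(self) -> None:
        self.calls: list[tuple[str, int]] = []

    def charge(self, customer_id: str, cents: int) -> str:
        self.calls.append((customer_id, cents))
        return "stub-charge-id"  # looks like success to the calling code

def checkout(gateway: PaymentGateway, customer_id: str, cents: int) -> str:
    # Application code depends only on the interface, so the shadow system can
    # run with StubGateway while production keeps RealGateway.
    return gateway.charge(customer_id, cents)
```

Emails, webhooks, and anything else with external side effects need the same treatment, and every stub is a place where shadow behavior can quietly diverge from production.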
Why would you avoid a perfect opportunity to test a bunch of stuff on your customers?
If you can make it work, migrating piecewise means less change and risk at each step than one big jump that moves everything at once.
But you need to have pieces that are independent enough to run some here and some there, and ideally pieces that can fail without taking down the whole system.
That’s a safer approach, but it means teams have to test in two infrastructures (old world and new) until the entire new environment is ready for prime time. They’re hopefully moving fast and definitely breaking things.
A few reasons:
1. Stateful systems (databases, message brokers) are hard to switch back and forth; you often want to migrate each one as few times as possible.
2. If something goes sideways -- especially performance-wise -- it can be hard to tell the reason if everything changed.
3. It takes a long time (months/years) to complete the migration. By doing it incrementally, you start reaping the advantages of the new infra sooner, and you avoid maintaining two complete copies of everything in parallel.
---
All that said, GitHub is doing something wrong.