Seems pretty sensible to not rely on a single provider for their large complex system?

Man, you should have been there 6 months ago when they decided to start tearing down GitHub's own data centers and move everything exclusively to Azure. Seems they themselves realized this after they started moving, but imagine if you could have helped them realize this before they even started :)

Made me think. Why not convert Github datacenters into Azure datacenters that have Github as their sole customer?

Then it's up to Azure how they will manage this

That sounds like the worst of both worlds? The Azure division that can't even reliably provide decent infrastructure products from its own data centers, now trying to do the same with a bespoke data center.

> Seems they themselves realized this after they started moving

I guess most people at GitHub knew exactly that it made no sense, but they didn't really have a choice. Maybe some voiced their concerns, got "we hear you" in response, and were told to proceed anyway.

Yeah, I don't know how it went down, but I also know exactly how it went down:

Microsoft Execs: Everyone needs to move to Azure!

GitHub developers: But Azure is not gonna be able to handle our load, we literally have our own data centers!

Microsoft Execs: Sure, but you're Microsoft now, so please publish a blog post about how in half a year you'll be 100% on Azure.

A few months later...

GitHub Developer: We've tried our best, users are leaving in droves and Azure can't keep up!

Microsoft Execs: Ok fine, you can use something else too, but only if you mainly use Azure and continue publishing blog posts about how great Azure is.

Azure is the MS Teams of clouds.

This isn't a mom and pop shop. They have locations all over the world: https://datacenters.microsoft.com/

There's no intrinsic reason they should be vulnerable to themselves.

+1. Multi-cloud is typically done for vendor independence.

But GitHub doesn't have that rationale.

That website (for me) uses Cloudflare via WPEngine, which also isn't Azure

I mean, Amazon (shopping, along with Prime Video etc.) runs on AWS.

It was more "we built AWS to run our stuff and figured out we can sell it too".

While Azure feels like the Temu clone of a cloud.

Actually incorrect. They figured they could sell the unused hardware retail didn't need during non-peak, and retail could become more scalable. They went off in a corner with uncle Andy for a year or two and built the basics. Like 10 years later, retail was actually using AWS and not something that pretended it wasn't on AWS. MAWS (being on AWS rather than bare metal) was like a 2012-2015 thing, and it took forever for NAWS (native AWS) to happen that wasn't Apollo, though Amazon still loves Apollo in many places. Kind of a dirty secret: retail wasn't on AWS until after AWS was really popular.

When I was at AWS, retail was not yet running on AWS. Has that changed?

Prime video does use some AWS services, but live and on-demand are two entirely different beasts.

Really? I thought retail was. It's been almost a decade since I worked at prime video but I think everything was running on AWS. (Some things didn't use brazil etc, but I think all the servers etc. were on AWS)

It's a distinction without a difference. All new development is NAWS (native AWS); legacy is MAWS (not sure about the acronym), which is still AWS under the hood and is mostly just a pool of EC2 instances with preconfigured networks. Nothing made in the last five or six years is on MAWS, and Amazon is a microservices shop, so things are always being built new. If you joined today, there's a good chance you'd join a team without any MAWS infra.

MAWS is “Move to AWS”, the name of the internal campaign to get legacy services into a somewhat-retrofitted AWS environment. It was a single VPC at one point.

I just finished a nearly five-year stint at Amazon and didn't realize there was pre-MAWS stuff still around. Never encountered any of it. I was like two months from my yellow badge but, uh, life is really better outside Amazon.

Many parts of AWS are not on AWS, and there are reasons to have bare metal, but it's not as common, and AWS gives you good access in most cases.

Prime video uses a non-AWS CDN when I watch football on it here in the UK

The BBC were unable to find a single CDN that could serve the UK during its peak football matches. https://www.bbc.co.uk/webarchive/https%3A%2F%2Fwww.bbc.co.uk...

That's called load balancing and regional availability. Many companies do multi-CDN; in fact, it's smart to use multiple CDNs so you can negotiate better at contract time. Twitch uses IVS, but we have failover to other CDNs for very large events.
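The multi-CDN failover described above can be sketched in a few lines: try providers in priority order and fall back when one is unhealthy. This is a minimal illustration, not how Twitch or the BBC actually implement it; the CDN names and hosts are hypothetical.

```python
# Hypothetical CDN list in priority order (names/hosts are made up for
# illustration -- not real endpoints of any provider).
CDN_ENDPOINTS = [
    {"name": "primary-cdn", "host": "cdn1.example.com"},
    {"name": "backup-cdn", "host": "cdn2.example.com"},
]

def pick_cdn(healthy: set) -> str:
    """Return the host of the first healthy CDN, in priority order.

    `healthy` would typically be maintained by an out-of-band health
    checker; here it is just a set of CDN names currently passing checks.
    """
    for cdn in CDN_ENDPOINTS:
        if cdn["name"] in healthy:
            return cdn["host"]
    raise RuntimeError("no healthy CDN available")
```

In practice the same idea is usually expressed as DNS-level routing (weighted or failover records) rather than client-side selection, but the priority-with-fallback logic is the same.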

I'd believe it - CloudFront always felt a bit like AWS ticking a box ("we have a CDN") rather than a product that's genuinely good to use.