I don’t think it’s paid shilling; it’s dogma that reflects where people here work. For the individual engineers, AWS is the hammer and every problem looks like a nail.
AWS/Azure/GCP are great, but like any tool or platform, you need to do some financial/process engineering to make an optimal choice. For small companies, time to market is often the deciding factor, hence AWS.
Once you’re a little bigger, you may develop frameworks to operate efficiently. I have apps that I run in a data center because they’d cost 10-20x at a cloud provider. Conversely, I have apps that get more favorable licensing terms in AWS, so I run them there even though the compute is slower and less efficient.
You also have people who treat AWS with the old “nobody gets fired for buying IBM” mentality.
I think a lot of engineers who remember the bare metal days have legitimate qualms about going back to the way that world used to work, especially before containerization/Kubernetes.
I imagine a lot of people who use Linux/AWS now started out in bare metal Microsoft/VMware/Oracle-type environments, where AWS services seemed like a massive breath of fresh air.
I remember having to put in orders for pallets of servers which then ended up in storage somewhere, because there weren't enough people to carry and wire them up and/or there wasn't enough rack space to install them.
Having the ability to spin up a server or a VM when you need it, without having to ask a single question, is very liberating. Sometimes that elasticity is exactly what's needed. OTOH, other people's servers aren't always the wise choice, but you have to know both environments to make the right call, and nowadays I feel most people don't really know anything about bare metal.
I spin up a VM on my Xen VM estate whenever I want one with just some clickops or Terraform (depending on the environment).
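For the scripted route, here's a minimal sketch of the same idea using libvirt's Xen driver from Python. To be clear, the domain name, sizes, and disk path are made-up placeholders, not my actual setup:

    import libvirt  # pip install libvirt-python; talks to the hypervisor

    # Hypothetical transient VM definition; memory/vcpu/disk are placeholders.
    DOMAIN_XML = """
    <domain type='xen'>
      <name>scratch-vm-01</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/xen/images/scratch-vm-01.img'/>
          <target dev='xvda' bus='xen'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("xen:///system")  # connect to the local Xen host
    dom = conn.createXML(DOMAIN_XML, 0)   # define and start a transient domain
    print("started", dom.name())
    conn.close()

Same elasticity as the cloud console, just pointed at your own estate.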
What do you think the pallets of servers were intended for?
That's the beauty of VMs.
Luckily, Amazon is far from the only VM provider out there, so this discussion doesn't need to be polarized between "AWS everything" and "on-premise everything". You can rent VMs elsewhere for a fraction of the cost. There are many places that will rent you bare metal servers by the hour, just as if they were VMs. You can even mix VMs and bare metal servers in the same datacenter.
Not just spin up a server, you can spin up whole regions, even in foreign countries, at the click of a button.
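To illustrate, a hedged sketch with boto3 (the AMI ID is a placeholder and would have to be one that exists in the target region; credentials are assumed to be configured):

    import boto3  # pip install boto3

    # Point the client at a region on the other side of the world.
    ec2 = boto3.client("ec2", region_name="ap-southeast-1")

    # Launch a single small instance there.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI for that region
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])

One parameter change and your workload is running in another country.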
the best is having rack space & power but not enough cooling, hahaha murder me
That only happens when you have your own data center. That's a whole different issue, and most people who own their hardware don't run their own data centers, as it's not particularly cost-efficient except at incredibly large scale.
Containers with k8s and bare metal aren't mutually exclusive.
If anything it enables a hybrid environment
No doubt -- there are plenty of downsides to running your own stuff. I'm not anti-AWS. I'm pro-efficiency and pro making deliberate choices. If the choice is to spend an extra $10M on AWS because the engineers get a good vibe, there should be a compelling reason why that vibe is worth $10M. (And there may well be.)
Look at what Amazon/Google/Microsoft do. If you told me you advocate running your own power plants, I'd eyeroll. But... if you're as large a power consumer as a hyperscaler, it's a totally different story: Google and Microsoft are investing in bringing old nuclear plants back online.
My company runs all its own bare metal data centers, but everything is containerized, and it's basically magic.
The tooling should be getting close to being able to manage this on-prem now: VMs, K8s clusters, networking, storage, etc. I know Oxide Computer exists, and they look fantastic, but there have got to be more 'open' ways to run things on your own Dell/HP/Supermicro servers with NVMe drives, especially since VMware has jacked up its prices since being acquired.
Talos OS looks really interesting. But I also need the storage parts, networking parts, etc.
I run several Talos clusters (provisioned by Cluster API) on commodity hardware that's part of a Proxmox cluster in my homelab.
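Cluster API handles it for me, but the manual flow it automates is roughly this -- a sketch driving talosctl from Python; the node IP and cluster name are hypothetical:

    import subprocess

    NODE = "10.0.0.10"  # hypothetical control-plane node IP

    def run(*args):
        subprocess.run(args, check=True)

    # Generate machine configs, push one to the node, then bootstrap etcd
    # and pull a kubeconfig -- the same steps Cluster API automates.
    run("talosctl", "gen", "config", "homelab", f"https://{NODE}:6443")
    run("talosctl", "apply-config", "--insecure", "--nodes", NODE,
        "--file", "controlplane.yaml")
    run("talosctl", "bootstrap", "--nodes", NODE, "--endpoints", NODE,
        "--talosconfig", "./talosconfig")
    run("talosctl", "kubeconfig", "--nodes", NODE, "--endpoints", NODE,
        "--talosconfig", "./talosconfig")

Once that's wrapped in Cluster API, adding a node is a one-line manifest change.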
Teams can't even run k8s well in the cloud. The teams I've seen running k8s on prem have always been disaster shows, productivity in the gutter.
The tooling to manage on-prem is truly awful, and attempts to port the nice semantics of the cloud over have all slowly died (who remembers LocalStack?).