There was a blog post on here the other day about moving back to bare-metal hosting, with practical explanations and problem-solving.

Cloud is great, but it's just borrowing someone else's machine for a fee. It's like a hyper-scalable and granular mainframe, but, like all mainframes, the client is powerless without it.

We need to get our act together and not allow ourselves to be smothered in silicon nimbus (clouds) and lose track of the open sky that is the internet.

I am not suggesting going back to the 2000s, with a small tower PC under someone's desk, blinking light and all, running a business-critical cron job.

Just that we need more appetite to take responsibility for our servers and systems. Yes, it takes time to manage and get up to speed, but that's empowering, right?

> Yes, it takes time to manage and get up to speed, but that's empowering, right?

I'd rather not. Instead, these compute clouds should be treated similarly to utilities: you don't really want to be running your own generators for electricity, do you? So why is it that we can set up utilities properly, where everybody can trust that they stay on and remain usable at a low enough price?

My theory is that vendor lock-in is the cause. Cloud should be a commodity, but it isn't, because nobody in the cloud business wants to be a commodity. Hence, they have every incentive to prevent that from happening.

I've seen how the cloud changed over 20 years, from S3/EC2 in 2006 to what we have today. I've also seen how it's built at AWS. It's ironic that they call it utility computing.

What I always feared as a user was that they'd invent a new billable metric, which happened a few times. Have you ever seen a utility add new billable metrics at this pace? Just look at the length of your monthly usage report: all those line items at $0 that could eventually be charged. Let that sink in.

Another interesting element is that all the higher-level services are built on core services such as S3/EC2. So the vendor lock-in comes from all the propaganda that cloud advocates have conditioned young developers with.

Notice how core utilities in many countries are state monopolies. If you want this to be a true utility, perhaps that's the way to get it started. The state doesn't need a huge profit, but it needs sovereignty and to keep foreign agents out of its DCs. Is it inefficient? Of course. But if all you really need is S3/EC2 and some networking/queuing constructs, perhaps private companies can own the higher-tier, lock-in services while guaranteeing they run on such a utility. This would give their users reduced egress fees from a true utility, which doesn't need (and is not allowed to have) a 50x profit on that line item.

The cloud providers themselves run their own generators. Computers aside, every place I've worked at that did critical manufacturing ran its own generators as well. That said, I agree with your general statement that this should be a commodity, but we've accepted that vendor lock-in is better than being locked in to a department / the cost of the humans in it.

I think Google has at least one datacenter that uses batteries instead of generators.

> In Belgium, we’ll soon install the first ever battery-based system for replacing generators at a hyperscale data center. In the event of a power disruption, the system will help keep our users’ searches, e-mails, and videos on the move—without the pollution associated with burning diesel.

https://blog.google/inside-google/infrastructure/cleaner-dat...

Disclosure: I work at Google, but not on anything related to this.

They only run the generators if the power company fails. To carry that analogy over to servers, that would be running your workloads in the cloud but, in the case of an outage, failing over to bare-metal servers you run yourself. That sounds like the worst of both worlds to me.

If I lose power, I can have a home generator running. These are incredibly common where I am due to hurricane outages; you'll sometimes hear multiple on a single block, powering whole homes.

If S3 goes down, your entire data infrastructure (and the systems attached to it) is out of reach. Local backups that can be deployed and run like S3, as if your services were still up, aren't very common (unless I'm mistaken) and would mean doing the very thing you got S3 to handle in the first place. A generator is comparatively much easier to set up, solves the problem 1:1 with zero dependence on local utilities, and kicks on almost literally the second you lose power. Yeah, they can't run forever and aren't the most cost-efficient, but people run them for weeks and have 100% function of all the electronics in their home again.

I am not responsible for building and deploying these systems; I just depend on some of them at my job and interact with S3/MediaConvert a ton. My understanding has always been that backups that can be restored very quickly are the aim, not trying to keep everything attached to S3 running as if it were still up. But if I am wrong, please let me know; I would love to hear more from folks about this! I actually find this whole dance very interesting.
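To make the S3 half of the comparison concrete: the client-side part of such a failover can be fairly small. Below is a rough sketch (an assumption of how it could work, not something I've run in production) of a client that falls back to a local S3-compatible store such as MinIO; the fallback endpoint, bucket, and key names are all made up, and credentials for the local store are omitted:

```python
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError, ConnectionError as BotoConnectionError

# Endpoints to try in order: None means the real AWS S3; the second is a
# hypothetical local S3-compatible store (e.g. MinIO) on the office network.
ENDPOINTS = [None, "http://192.168.1.50:9000"]
BUCKET = "media-assets"  # made-up bucket name

def s3_client():
    """Return a client for the first endpoint that answers a cheap probe."""
    for endpoint in ENDPOINTS:
        client = boto3.client(
            "s3",
            endpoint_url=endpoint,
            config=Config(connect_timeout=2, retries={"max_attempts": 1}),
        )
        try:
            client.head_bucket(Bucket=BUCKET)  # fast reachability check
            return client
        except (ClientError, BotoConnectionError):
            continue  # unreachable or erroring; try the next endpoint
    raise RuntimeError("no S3 endpoint reachable")

# Callers don't care which backend answered.
obj = s3_client().get_object(Bucket=BUCKET, Key="reports/latest.json")
```

The hard part, of course, is the other half: replicating your objects to the local store ahead of time so there's something to fail over to, which is exactly the work you bought S3 to avoid in the first place.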

Utility prices are also pretty damn high right now, so I'm not sure that's the best model to choose going forward.

It's not unheard of near me, in a rural area, to have a diesel generator in your garage, as the electricity is unreliable just often enough; and especially given that working from home is common nowadays, losing electricity for 6h now and again isn't acceptable. That's not running your own generator for electricity as in your analogy; that's having a reasonable backup for short-term redundancy. I think it's reasonable for companies to act similarly: rely on the cloud, but have a backup that provides at least degraded functionality as opposed to none.

The most pernicious effect of Big Cloud is the erosion of infra know-how among the general public. People increasingly need to go out of their way to learn about computer hardware, networking, databases, disaster recovery, etc., when the default business decision is to outsource those tasks to a cloud service provider. For Cloud to become a commodity, computer professionals need to understand both infra and software.

Because utilities are commodities and natural monopolies. CSPs are neither.

Sort of an aside, but utilities are handled differently in many states, and the expansion they're doing specifically to accommodate data centers is going to massively increase consumer rates.

I feel like current VPSs are already close to a commodity: most work the same and even have the same UI.

Seems kinda obvious why they wouldn’t want to be a commodity.

It’d lead to direct comparisons between clouds (as someone who has estimated moving legacy workloads, it’s a shitload of work to get a reasonable “drive-away” price for one of them, and the work has to be duplicated to do it for the other).

If they were truly commodities, then that goes away, as do their margins to a large degree; none of them wants that.

I believe this will only be possible once provisioning in a colo is as easy and as fast as cloud provisioning. Think Hetzner, but faster and more customizable; and for that, the physical commoditization would need to be more nimble and modular than it is now. Hyperscalers can drive down their costs through heavy customization of hardware and software, while most colo and other builders are basically deploying OEM machines or commercial rack units. The day one can rent space and spec out a machine and its network bandwidth in less than a day, and scale it on a similar timeline, the power balance will shift a bit back from the hyperscalers.
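For comparison, here's the bar the cloud APIs set. With something like Hetzner's Cloud API, a new machine is a single authenticated POST and is usually booting within a minute; treat this as a sketch, since the token is a placeholder and the server type / image names are examples that change over time:

```python
import requests

API = "https://api.hetzner.cloud/v1/servers"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

resp = requests.post(
    API,
    headers=HEADERS,
    json={
        "name": "worker-1",
        "server_type": "cx22",    # plan name: example, check the current catalog
        "image": "ubuntu-24.04",  # image name: example
        "location": "fsn1",       # Falkenstein, Germany
    },
    timeout=10,
)
resp.raise_for_status()
server = resp.json()["server"]
print(server["id"], server["status"])  # typically "initializing" within seconds
```

Renting colo space and speccing a physical machine has nothing like this today, and that's the gap I mean.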

Sounds like you should apply to Oxide Computer.