PC is the last major open platform. While other platforms like Android are becoming less open, the PC in general is becoming more open than it's been in a long time: heavy macOS/Android/iOS competition is creating a focus on open standards, and Linux support at an all-time high gives people a place to land and tinker/hack to their heart's content.

I think we will see an abandonment of consumer-grade PC components, and individuals will either be pushed towards closed hardware like PlayStation, MacBooks, and Android devices, or towards server-grade components. I already have a home server rack, and would recommend one to other people.

> I already have a home server rack, and would recommend one to other people.

I just want to warn people who haven't heard server-grade hardware in person before: this is only for people who can put a server rack somewhere unpopulated, like a garage or basement. Servers will make you think "wow, leafblowers sure are quiet". They are not suitable for apartment dwellers such as myself. When I was setting up my 1U before shipping it off to a colo, I wrote scripts and had detailed plans of the things I needed to run so I could minimize the time it was making my ears bleed.

The noise problem is pretty easy to mitigate by choosing 2U servers instead of 1U. The latter are forced by the form factor to use smaller, higher speed fans.

A bigger issue for enterprise hardware is that it's optimized for performance per watt under load, not idle power consumption. Running a mostly-idle rack server 24/7 can result in a pretty sizable electric bill. This also depends heavily on the model. Some will idle at ~50 watts, others at ~300, but both of these are significantly higher than a Raspberry Pi or an old laptop which for personal use will generally do the job.
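To put rough numbers on that, here's a quick sketch (the $0.15/kWh rate is an assumption; plug in your own tariff):

```python
# Annual electricity cost of a box running 24/7 at a constant draw.
def annual_idle_cost(idle_watts, price_per_kwh=0.15):  # rate is an assumed example
    kwh_per_year = idle_watts * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

for watts in (5, 50, 300):  # roughly: Raspberry Pi, frugal server, hungry server
    print(f"{watts:>3} W idle -> ${annual_idle_cost(watts):.0f}/year")
```

At that rate, a 300 W idler costs about $394/year versus roughly $7 for a Pi-class machine.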

Business class desktops are also a good alternative here. Many models have pretty reasonable idle power consumption (check this for yourself, I've seen 6W but also 60W) and then you get a couple of drive bays and PCIe slots and expandable RAM which you don't get from a Raspberry Pi.

These days, pretty much the only thing that makes sense is a mini PC. AMD laptop chips generally trade blows with Apple stuff on power efficiency when you thrash them, and you get a surprisingly capable machine for not very much money.

It's really not worth it to run old hardware 24/7 unless it's making money. Buying a new machine of equivalent capability is (normally) pretty cheap, and it doesn't take very long for the power savings to pay for themselves.
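As a rough payback sketch (the price, wattages, and $0.15/kWh rate below are all made-up illustrative numbers):

```python
# Years until a new machine's power savings cover its purchase price.
def payback_years(new_price, old_idle_w, new_idle_w, price_per_kwh=0.15):
    saved_kwh_per_year = (old_idle_w - new_idle_w) * 24 * 365 / 1000
    return new_price / (saved_kwh_per_year * price_per_kwh)

# A hypothetical $300 mini PC replacing a rack server that idles at 150 W:
print(f"{payback_years(300, 150, 10):.1f} years")  # well under two years
```

Swap in an old box that only idles at 20 W, though, and the same calculation stretches the payback past twenty years.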

They can be had with fairly respectable specs too. Certainly enough to play around with small local models.

"When you thrash them" is kind of the issue. There are ten year old business desktops with a <10W idle power consumption. If your use for it is to have something to rsync files to and host your personal website and the like, even old hardware is going to average 99% idle. There is no meaningful power savings from newer hardware unless you're consistently putting it under significant load.

Some of the newer hardware is actually worse because the idle power consumption of PCs since around 2010 is determined in significant part by the low-load efficiency of the power supply. Brand new machines with the wrong power supply can use several times as much power at idle as ten year old machines with the right power supply. Annoyingly, power supply efficiency at idle is rarely documented so the only thing to do is measure it.

[deleted]

Kind of a random aside, but I never realized how obnoxious LEDs were until I got a studio apartment and started sleeping in the same room as my homelab / workstation / networking hardware. Electrical tape saved me, but wow. You sure can produce a lot of light with a milliwatt of electricity :)

(And yes, my workstation has a clear case and LED RAM. Yes, I'm an idiot. Whenever Windows applies an update late at night, I wake up if it turns back on. I don't know what I was thinking when I built that thing, but never again.)

I always thought it would be low-grade hilarious to record a fairly long video of the unboxing and assembly of a ridiculously elaborate in-case LED setup, only to reveal with a straight face and at the absolute last minute that the case in question is entirely opaque.

I like to put a little red wax over LEDs (at least, ones that I don’t touch). That way you can still see them, but they are dimmer, and the red tint makes the light less annoying at night.

Is it even possible to buy computers these days that don't look like they're intended to be the lighting system at a rave?

Yes it is.

Even worse are phone chargers, intended to be used next to your bed, that light up like a Christmas tree when running. Black electrical tape is great for the worst of it, but you still need a few things available to tell you the operational status, if only they'd dim them a bit.

You're right, I may have significantly over-estimated the percentage of people on hn that have dealt with server hardware. It's expensive, big, loud, power hungry and temperature sensitive.

You can buy server boards that don’t require loud fans. If you’re buying used server gear from a datacenter then it will be like what you said.

I have a 4U NAS with a supermicro board and an i3 chip with 6 WD Red NAS drives and it’s very quiet. The chassis came without fans so I installed the brand I like.

no, you definitely cannot. you probably have a consumer board in a 4u case

Completely wrong.

Tell me you've never owned a Supermicro board without telling me. They support regular 80mm/90mm Noctuas and function just fine. There are specific Supermicro mounts for them.

> my 1U

1Us have the most compromised ventilation and compensate with loud fans running at high speeds.

Yeah, it certainly wasn't the quietest choice of form factor, but the fact remains: server-grade hardware is not optimized for noise. It's meant to run in datacenters, not living rooms, so noise was never a concern. A nice thing about consumer-grade hardware is that it's optimized for both sound and power consumption, because those devices are designed to be around humans. So I certainly hope consumer-grade hardware survives.

In my first job we worked in a room full of 4Us and it was always refreshing when we powered them all down for the weekend. So quiet. It’s almost like there was a reason why consumer-grade hardware existed.

I have a 4U with noctua fans and the loudest part of my rack is the harddisks

It's not only the loudness: small fans have a subjectively more annoying sound even at the same volume. Much more shrill than a large fan.

Sure. But are there actual limits on how much noise they're allowed to make?

This is all built to be put in a place where noise is not an issue

You can make those rackmounted servers as loud or as quiet as you like. For home, optimize quiet (and low power consumption).

Even though my server rack is in the garage I try to keep it quiet. A couple of them are fanless Atom-based and others have fans, but they are built to be quiet. If you need hardware that generates a lot of heat, go with 4U for large fans that spin slowly and thus make little noise.

The "wow, leafblowers sure are quiet" happens when you stuff a lot of heat generation into a 1U chassis that then requires lots of tiny fans running at full speed. Those you don't want at home! But it is easy to avoid. Data centers do this to maximize density, but that's unlikely to matter at home.

>Atom-based

Not exactly enterprise-grade servers then?

Supermicro sells Atom-based SKUs with enterprise features like a BMC+IPMI, 10Gb SFP+ ports, ECC memory, SFF-8087 ports, chassis intrusion detection, etc.

And do you need a full-on enterprise-grade server? Given the choice between a 1U server whose fans even at minimal utilisation can still be heard three doors away and something with a low-power/laptop-grade CPU that does the same job silently and with little power use, I'll take the latter.

If you build your own servers you can make them silent.

I had a 2U Xeon beast I kept water-cooled. Before I installed the water cooling, a bit noisy and 60C. Afterwards, total silence and 30C.

I sit next to my 4U server with all enterprise components apart from fans - these are consumer grade.

I had to mod the chassis slightly (with just pliers, tape and random inserts) to fit these fans in there, and add fans in front to push the air in. The PSU that came with it was obnoxiously loud, but thankfully, Supermicro has a quiet version that I can't even hear. Even if SM didn't have this PSU, I could have easily modified the PSU and fit some noctuas in there without any issue or safety concerns - like I did with my enterprise grade Mikrotik switch that also had obnoxious fans by default.

I even have an enterprise grade UPS that is dead silent when it's not running on battery power (I swapped the fans there too).

I essentially try to buy enterprise gear whenever possible. Not only is it usually much better than the consumer alternative, it's also frequently much cheaper because of the second-hand market. Before AI sucked the soul out of the hardware market in general, you could have bought enterprise SSDs with a life expectancy (TBW) measured in petabytes and an MTBF of practically never, for half the price of a top consumer SSD with TBW measured in tens of TB and an MTBF of yesterday.

And the entire rack is just slightly louder than the PC I was using.

The only consumer grade computer at my home is my MacBook and my phone.

Enterprise SSDs are all that. Just make sure you keep them powered up: the data retention requirement without power is 3 months for enterprise vs 1 year for consumer grade.

I had to provision a 1U server in grad school. Turning that thing on in the office was a joke. It was completely impossible to work with it on if you were anywhere in that part of the office.

I built PCs for a number of years and then shifted to some combination of RPis, MacBooks, and (maybe) Mac Minis. It was a long phase that involved quite a bit of money, and often frustration, but I'm almost certainly not going to do it again.

And I don't know what the reliability gain is from consumer to "server" hardware. One extra 9? Hot-plug power supplies? Definitely more RAM slots, I guess...

Meanwhile... I see Axiomtek industrial computers that don't even have a power button sold with 7-year warranties.

A 4U case is basically just a midsized tower with rack ears (or rails).

A 1U case runs the gamut in noise from vacuum to jet-engine.

I had exactly this problem, 1U server that sounded like a 747 taking off downwind. I solved it by getting a mini-PC that had more processing power than the eBayed 1U server (I just looked up what was available in terms of CPUs and got the best bang per buck, an 8C16T AMD CPU) and that runs essentially silent except when it's under load - they're designed for low-power/silent operation. If you're running your server at 100% load 24/7 then this isn't for you, but for home "server" use it was ideal.

This. At one company we ran out of space in the server room, so the excess machine temporarily landed next to my desk. Dear god. Noise cancelling headphones couldn't cope with the noise.

If you’re living in an apartment I could definitely see it being non-viable, but if you’re in a house I don’t think it’s a big deal.

Every house I’ve lived in has had machinery for water pumping and heating and we just put our server along with them.

Reminds me of when, as a kid, I got one of those Delta 7000 RPM fan CPU coolers; my mom promptly asked what it would cost to make that noise (which could be heard in the entire apartment) go away. Got a Zalman (back when they were great) and everything was good.

It was a learning experience, and I think everyone should experience that kind of industrial noise at least once to appreciate how quiet consumer hardware is.

I remember a review from back then which contained the phrase "strap on the 7000rpm from hell". Those Deltas definitely weren't quiet.

"I think we will see abandonment of consumer grade PC components and individuals are either pushed towards closed hardware like Playstation, MacBooks, and Android devices or they are pushed towards server grade components."

What about "industrial" grade and "development" boards

I have been using a "server" OS (no graphics, not Linux) as a "client" OS for many years. IME, the above hardware categories work well enough with "server" OS. I would welcome being "pushed" towards server grade components

> PC is the last major open platform.

In the whole history of computing, the PC is the only platform where buying a computer means a crazy number of options and configuration mixes to choose from, with the reasonable expectation that it will work, and that the warranty will support it too! You can run any OS of your choice on it, and that's also a reasonable expectation.

With any other platform (Sun, Be, Amiga, NeXT, Apple), it was always a matter of buying from one company only, from its list of products. And even running a different version of the OS means the warranty doesn't cover it.

I came back to this comment 12+ hrs later hoping to find someone make a great argument for some platform in the 70s that I didn't know enough about, or maybe a modern open hardware movement that is building niche support.

I guess it really is just the PC.

Assuming this trend continues, I think people are going to start re-using older hardware rather than turning to server-grade hardware (which is often not convenient for the average residential situation).

At least, that's what I hope happens. What will probably happen is people will continue to migrate away from the PC platform and towards closed platforms for the convenience, if history is any indication.

I think this is already happening, sort of. At least, people are hanging onto their older-but-not-yet-old components for much longer than they used to. I recently tried to build a NAS from eBay parts, and I was surprised to find that the newest stuff affordably available was 6th/7th generation Intel Core parts (retailed 2016/2017). I think people are trying to offload these CPUs in particular because they can't run an unmodified Windows 11 installation (no firmware TPM 2.0 implementation, and the corresponding consumer motherboards typically didn't have a discrete TPM module, either, if they had an LPC bus connector at all). Very little (reasonably-priced) availability of similar-aged Ryzen CPUs (which have firmware TPM support) or newer Intel CPUs.
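If you're evaluating one of these eBay boxes under Linux, here's a quick sketch for checking whether the kernel sees a TPM at all (the `/sys/class/tpm` path is standard sysfs, but the function and its use here are just an illustration; tools like tpm2-tools can tell you more, e.g. whether it's a 2.0 device):

```python
from pathlib import Path

def has_tpm(sysfs_root="/sys/class/tpm"):
    """True if the running Linux kernel exposes at least one TPM device."""
    root = Path(sysfs_root)
    return root.is_dir() and any(root.iterdir())

if __name__ == "__main__":
    print("TPM present:", has_tpm())
```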

That's what I've been doing for years. I buy (or get for free) enterprise PCs coming off lifecycle at surplus sales. Nothing I do at home needs a cutting-edge CPU. Unless you're a hard-core gamer or a serious hobbyist/tinkerer, a 5-year-old or even older PC running Linux is very adequate.

Why would most people need a home server rack? That's a lot of noise, space, and electrical usage. For what most people would need a home "server" for a NUC PC or Mac Mini would do the job.

Ziply Fiber is offering 50 Gbps home internet connections in some US locations. You cannot utilize that type of speed with a Mac Mini. Even the modest 8-10 Gbps connections offered by T-Mobile and Google probably require more.

Doesn't really answer the question though, why would someone be trying to utilize that much bandwidth out of their house?

Is this for people trying to start the next netflix out of their garage before they have any money to put the servers in a colo?

VPNs. If you have a NAS and require high-speed access from/to your home files (dumping your Apple ProRes RAW rushes off your external SSD, so you can keep shooting your video, for instance), that kind of bandwidth cements your income.

You and 49 other people all simultaneously working from locations with gigabit uplinks

> You cannot utilize that type of speed with a Mac Mini.

Mostly because the base Mini has Thunderbolt 4 which maxes out at 40Gbps. Anything with a PCIe 4.0 x16 slot will take a 100Gbps NIC. 100Gbps is around 10GBps (8 bits per byte plus encapsulation overhead). Desktop CPUs can do AES-GCM at 2.5GBps+ per core and have up to 16 cores and around 50GBps of memory bandwidth (dual channel DDR4-3200), so the NIC still seems like the bottleneck.
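A quick sanity check of those numbers (the ~10% framing/encoding overhead and the per-core AES-GCM figure are rough assumptions taken from the estimate above):

```python
# Can a desktop CPU keep up with a 100 Gbps NIC?
link_gbps = 100
payload_gBps = link_gbps * (1 - 0.10) / 8  # ~10% overhead assumed, 8 bits per byte
crypto_gBps = 2.5 * 16                     # ~2.5 GB/s AES-GCM per core x 16 cores
mem_bw_gBps = 50                           # dual-channel DDR4-3200, roughly

print(f"link payload ~{payload_gBps:.2f} GB/s")      # ~11.25 GB/s
print(payload_gBps < min(crypto_gBps, mem_bw_gBps))  # True: NIC is the bottleneck
```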

Why would most people need a NUC PC or Mac Mini when a pencil would do the job?

Degrowth is dumb; people will find uses if they have more.

Contrary take: I believe we will see an expanded market for capable PCs that can be sanely put in a living space, by extension of the gaming-PC niche to local AI. Both Nvidia and AMD are developing product lines in that direction (DGX Spark, Ryzen AI Max). And Linux will be more prominent than ever, due to several independent reasons: MS dropping the ball hard on Windows, SteamOS making Linux attractive for gamers, 'digital sovereignty' as a trend, and Linux being the de facto standard for hosting AI (or anything, really).

Great take, but if the market is expanding for capable PCs why are motherboard sales decreasing?

According to the article, because components are really expensive right now, particularly RAM and storage.

Well, the two chips I mentioned (DGX Spark uses the GB10) are both SoCs, so no motherboard is needed there. I don't know if that's the full explanation, but it could be a factor.

The SoC design with unified memory is generally well suited for residential use because it's quite energy-efficient, quiet and small (compared to traditional GPU-powered gaming rigs). Great performance-per-annoyance, so to say.

Mini PCs (NUC-ish form factor) are selling a lot now too, small, quiet, most people don't need expansion over what you can get from eg USB4.

For most people, I’d recommend a NUC or a NAS with an unlocked bootloader (so you can put Linux on it) for a home server.

Most home users need a small amount of compute, and are sensitive to noise and power use.

[dead]

This will surely bring new energy into opening these platforms, as it did in days before

why?

I'm interested to know, WHY is PC so open? what led to that?

Many vendors, because that means you need specs and that in turn allows for interoperability

You might be interested in the IBM PC compatible and Wintel wikipedia pages. This is a super high level timeline, but it is more interesting to get into the detail.

At a high level, the IBM PC platform was very well documented and sold well, to the effect of producing tons of software and peripheral add-ons. This led some other computer companies to reverse-engineer the proprietary IBM BIOS, allowing their machines ("PC compatibles") to run the same software and use the same peripherals. Because these were clean-room reimplementations, IBM didn't have a legal case to prevent their sale.

Fast forward a bit: IBM's attempt at a new, closed platform, PS/2, flopped. People wanted their more open hardware. Windows became dominant enough that all the demand was for x86-based hardware that could run Windows. Microsoft was happy to work with many vendors.

The PC is very open today, but Apple survived. Atari ST and Amiga probably survived longer than you think as well.

Agreement IBM had to make with the DoJ/etc in the 80s to open the PC platform to avoid antitrust prosecution. That was the key event.

I would argue that the key event was Columbia Data Products’s clean room implementation of the BIOS.

https://en.wikipedia.org/wiki/Columbia_Data_Products

That, and I’m pretty sure the DOJ had ended the antitrust suit (which was about bundling) by the time the PC was released.

Because Microsoft commodified their complement in the 1980s to break the back of IBM.

Wouldn’t recommend a home server rack in an apartment. For high wife approval factor, you can put Epyc hardware with Noctuas in a bigger case. I’ve got one at home. Runs my blog and a bunch of other things. Home is at 32 dB right now.

Realistically a Mac Mini will probably blow a lot of things out of the water on price / performance. Even an older one.

The problem with all those devices you listed is that they have lost the "general purpose" ability. I guess you could define "general" to mean "carefully curated"...

> ... or they are pushed towards server grade components. I already have home sever rack, and would recommend it for other people.

An actual rack with noisy 1U or 2U servers may be a bit overkill but on the plus side there's a guaranteed endless supply of such used servers.

Now there's a happy middle ground: used workstations with ECC memory, that you then use as servers.

People would be really wise not to underestimate what a 12-year-old dual-Xeon, 14 cores each, 56 threads in total, can do, for example. And such a complete workstation can basically be found for less than what it takes to fill my car's gas tank (granted, it's got a big tank and it's a fancy car whose manufacturer recommends using only 98+ octane).

A single-Xeon workstation with a shitload of memory in a tower form factor is basically silent. Mine is. Dead quiet, next to the vacuum cleaner and the cat's food in a tiny room. I use it as a headless server.

And that's with the default PSU and fans. There are, of course, people modding these with adapters for regular consumer PSUs and then putting ultra-quiet PSUs in those. Same with Noctua fans etc.

And as for the usual complaint, "but a server that is on 24/7 consumes too much electricity"... I only turn on my servers at home when I begin to work: I don't need them to be on 24/7.

So yeah: "Server CPU + ECC" doesn't imply noise. And "Server CPU + ECC" doesn't imply it has to be on 24/7 either.

I recommend this too!

I like my Dell Precision T7910 (dual-socket Xeon FTW) a lot.

What are you using?

Buy a Steam Deck, and a Steam box.

And flying pugs are gonna fall from the sky too.

> While other platforms like Android are becoming less open

ok....

> PC in general is becoming more open than it's been in a long time as heavy MacOS/Android/iOS competition is creating a focus on open standards ...

I'm so confused by what you're trying to say here.