I really don't get why people like the Minisforum stuff over the alternatives. I've unfortunately been given one, and honestly I'm unimpressed: crap firmware, no real expandability, and all the other compromises that come with buying AliExpress hardware. For the same money you can either pick up a used entry-level Dell/HP/Lenovo server (and if they're E3/W/other entry-level Xeons, they're usually not terrible on power) or get a good ATX chassis and power it with some off-lease Supermicro hardware. Then you don't need to compromise on things like OOB management, hot-swap bays, a real SAS card, real 10G NICs, ECC RAM, etc. Maybe people are just afraid of putting a little hardware together? I've seen and own systems built from the above gear that have been going for well over a decade now with basically no hiccups, and even the old Sandy Bridge era E3 stuff probably punches above an RPi5 or N100 while drawing no more than 30-40W once you take the spinning disks out. I'm sure if you avoid AMD and go find a newer T-variant Intel chip, you can have your cake and eat it too.
The N100 is faster and more efficient than any Ivy Bridge E3. At idle the Xeon draws roughly 20W more, which works out to about $30 USD/year at the national average electricity price. That gap widens as the load increases.
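That $30/year figure holds up as a back-of-envelope calculation; a quick sketch, assuming roughly $0.17/kWh (near the recent US residential average, which is my assumption, not a figure from the thread):

```python
# 20W of extra idle draw, running 24/7, priced at an assumed $0.17/kWh
watts_extra = 20
hours_per_year = 24 * 365
kwh_per_year = watts_extra * hours_per_year / 1000  # 175.2 kWh
cost_per_year = kwh_per_year * 0.17                 # ~$29.78/year

print(round(cost_per_year, 2))
```

At full load the delta between an old E3 and an N100 is much more than 20W, which is why the gap widens as utilization goes up.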
I can totally see why someone who doesn't need expandability would choose the cheap mini PC.
When I first got into homelabbing as a hobby, I built a massively overpowered server because I was highly ambitious. It mostly just drew power for projects that didn't require all that horsepower.
A decade later, I like NUCs and Pis and the like because they're tiny, low-power, and easy to hide. Then again, I don't have nearly as much time or drive for offhand projects as I get older, so who knows what a younger me would have decided with the hardware landscape available today.
A decently powerful server is nice when you need it. Having a modern APU with decent encoding and decoding performance is great.
There are tasks that benefit from speed, but the most important thing is good idle performance. I don't want the noise, heat or electricity costs.
I'm reluctant to put a dedicated GPU into mine, because it would almost double the idle power consumption for something I would rarely use.
Even my old GTX 970 can throttle down to something like 10W while still driving a display and, iirc, hardware-decoding 1080p60 h.264, let alone being put into a low-power mode that approaches S3/suspend-to-RAM via PCIe sleep states. I'm pretty sure laptops with dGPUs normalized aggressive power gating to keep the GPU's impact on battery life negligible (beyond its weight otherwise being usable for more battery) until you launch an application you've set to run on the dGPU.
I just purchased a Minisforum BD795i SE mobo with a Ryzen 9 7945HX (16 cores, 32 threads). Can't beat the price to performance. I'm building a NAS/VM server with 5x 14TB Seagate Exos drives, a 2TB NVMe drive, a 500GB boot SSD, and 96GB of DDR5 memory. I was able to buy all components, including a 3U hot-swap 5-drive caddy, for less than $1,200 all in. Can't really beat that.
For appliance-like, quickly replaceable little servers such as my firewall or other one-off roles they're fine, but to run my TrueNAS system (ZFS) I've got to have a Supermicro board and ECC. That box is my mission-critical, general-purpose always-on homelab server and needs 24/7 uptime!
I am running TrueNAS on it; honestly, ECC is blown out of proportion. This isn't storing military state secrets.
I used to really like the minis, but I had to basically e-waste two of them because the Ethernet went bad (lightning strike, I think), there was really no way to replace it, and the OS kept crashing from the resulting hardware issues.
FWIW, if this keeps happening to you, you can get Ethernet surge protectors. Or use a couple of cheap media converters: copper to fiber and back to copper.
I recommend using optical networking if you are confident the damage was from a lightning strike.
ECC-capable hardware tends to be very power hungry.
That's just an artifact of Intel disabling ECC on consumer processors.
There's no reason for ECC to have significantly higher power consumption. It's just an additional memory chip per stick and a tiny bit of additional logic on the CPU side to calculate ECC.
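The "one extra chip" sizing falls out of standard SECDED (single-error-correct, double-error-detect) coding; a quick sketch of the math, assuming the textbook Hamming-plus-parity construction rather than any vendor's exact scheme:

```python
# Single-error correction over m data bits needs r check bits satisfying
# 2**r >= m + r + 1; double-error detection adds one overall parity bit.
def secded_check_bits(data_bits: int) -> int:
    r = 0
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1  # +1 overall parity bit for the "DED" part

# For a 64-bit memory word: 8 check bits -> a 72-bit-wide DIMM,
# i.e. one extra x8 DRAM chip per rank.
print(secded_check_bits(64))  # 8
```

Eight extra bits per 64 is 12.5% more DRAM, so the power overhead of the ECC chips themselves is in the same ballpark, which is negligible next to a CPU's idle draw.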
If power consumption is the target, ECC is not a problem. I know firsthand that even old Xeon D servers can hit 25W full-system idle. On the AMD side, the 4850G has 8 cores and can hit sub-25W full-system idle as well.
My HP 800 mini idles at 3W
Not always: the HP MicroServer N54L had ECC support.