Whatever you have lying around is a great starting point.
It all comes down to what you want to spend versus what you want to host and how you want to host it.
You could build a Raspberry Pi Docker Swarm cluster and get very far. Heck, a single Pi 5 with 4 GB of memory will get you on your way. Or you could use an old computer and get just as far. Or you could use a full-blown rack-mount server with real IPMI. Or you could use a VPS and accomplish the same thing in the cloud.
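For what it's worth, standing up a toy swarm is only a couple of commands; the address and service below are placeholders:

```sh
# On the first box (which becomes the manager); advertise its LAN address.
docker swarm init --advertise-addr 192.168.1.10

# That prints a join token; run the printed `docker swarm join ...`
# command on each worker node.

# Then spread a replicated service across the cluster.
docker service create --name web --replicas 3 -p 80:80 nginx
```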
And what if I don't have anything lying around?
A mid-range gaming build without the GPU is capable of running a full SaaS stack for a small company, let alone an individual.
https://pcmasterrace.org/builds
Depends on what you want to do with it. To start, any old free PC you can find online will work for experimenting.
If you plan to run it 24/7, buy some proper fans and undervolt or set power limits on the CPU, and you'll save on both noise and electricity.
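On Linux, capping power can be as simple as this sketch (the RAPL path and the 35 W figure are examples; actual undervolting is usually a BIOS setting):

```sh
# Cap the CPU package's long-term power limit via RAPL (value in microwatts).
echo 35000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw

# Or simply keep clocks (and fan noise) down with the powersave governor.
sudo cpupower frequency-set -g powersave
```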
> You could build a Raspberry Pi Docker Swarm cluster and get very far. Heck, a single Pi 5 with 4 GB of memory will get you on your way.
No, you couldn't, and no, you wouldn't.
To build a swarm you need a lot of fiddling and tooling. Where are you keeping the nodes? How are they all connected? What's the clustering software? How is this any better than an old PC with a few SSDs?
A Raspberry Pi with any amount of RAM is an exercise in frustration: it's abysmally slow for any kind of work or experimentation.
Really, the only useful advice is to use an old PC, a VPS, or a dedicated server somewhere.
I would not use a NUC like this guy. I had one and it was slow and had limited capacity.
Then I used my old PC, which was very good, but I wanted more NVMe disks and the motherboard supported only 7.
Now I am migrating to a Threadripper, which is a bit overkill, but it gives me the ability to run one or two GPUs along with, say, 23 NVMe disks.
I also have a Threadripper Pro with tons of PCIe lanes. I just wish there were an easier way to use newer datacenter flash and that it weren't so expensive. I'm hoping those servers that hold twenty-something U.2/U.3 drives start getting decommissioned soon, as I hope my current batch of HDDs will be my last. Curious to know how you're using all those NVMe drives?
Asus and Acer motherboards support bifurcation on PCIe slots. So, for example, you can enable this in the BIOS and use an Asus Hyper expansion card to put 4 NVMe disks into a single PCIe slot: https://www.asus.com/support/faq/1037507/
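You can sanity-check from Linux that the bifurcation took; all four drives on the card should enumerate as separate devices:

```sh
# Each NVMe drive shows up as its own PCIe device.
lspci | grep -i 'non-volatile memory'

# List the drives themselves (from the nvme-cli package).
sudo nvme list
```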
There are other cards like that, e.g. https://global.icydock.com/product_323.html. This one has better support for smaller disks and makes swapping them much easier, but costs about 4 times more.
I think I could fit even more drives in my new case, e.g. using a PCIe-to-U.2 card and then 8-drive bays. But that would probably cost me about 3 times more just for the bay with connectors, and I do not need that much space.
https://global.icydock.com/product_363.html
If you like U.2 drives, IcyDock provides solutions for them too. Or, if you want to go cheaper, there are other cards with SlimSAS or MCIO: https://www.microsatacables.com/4-port-pcie-3-0-x16-to-u-2-s...
But U.2 disks are at least 2 times more costly per GB; a 40 TB drive runs around $10k, i.e. about $0.25/GB. That's too much, IMO.
I'm your opposite :-)
Intel N100 with 32 GB RAM and a single big SSD here (but with daily backups).
Eats roughly 10 watts and does the job.
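For the daily backups, a single cron entry is enough; here's a sketch with restic, where the repo path and password file are placeholders for whatever you actually use:

```sh
# Run daily from cron, e.g.: 0 3 * * * /usr/local/bin/daily-backup.sh
# Backs up /srv to a local restic repository.
restic -r /mnt/backup/repo --password-file /root/.restic-pw backup /srv
```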
If this does the job for you, sure. For me they were very pricey at the time compared to the old Intel Core i3 PC I already had lying around. And power cost does not really matter in my case.
I have two NUCs (Ryzen 7 and Intel i5); they're rock solid.
Yes, if this works for you, sure, why not. A few years back a decent NUC cost at least $1k. And it is still quite small, so you cannot slam 8 SSDs in there.
I did use my old PC and it worked very nicely with 4 SATA SSDs in RAID 10.
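Setting that up with mdadm is basically a one-liner; the device names here are examples:

```sh
# Mirror-plus-stripe across four SSDs, then put a filesystem on the array.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md0
```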
And as I already said in another comment: in my case power does not matter much. Neither does space.
"More RAM than you think you'll need" -- particularly if you virtualize. :)
Why? I was running around 15 containers on hardware with 32 GB of RAM. You could probably safely use disk swap as additional memory for less frequently used applications, though I did not check.
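If anyone wants to try that, adding swap takes a minute; the 8 GB size here is arbitrary:

```sh
# Create and enable a swapfile (add it to /etc/fstab to persist across reboots).
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Discourage the kernel from swapping hot pages too eagerly.
sudo sysctl vm.swappiness=10
```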
For my case, and my workload, the answer has always been "RAM is cheap, and swapping sucks" -- but there are folks using an RPi as a NAS platform, so really... my anecdote actually sucks upon reflection and I'd retract it if I could.
For every clown like me with massive RAM in their colo'd box, there is someone doing better and more amazing things with an ESP32 and a few molecules of RAM :D
And things like "this is a rack you can use, it will not cost you a kidney, and it will not blow your eardrums out with noise"