I just learned about the whole homelab thing a week ago; it's a much deeper rabbit hole than I expected. In fact, I'm planning to set up Proxmox today for the first time and retire my Ubuntu Server setup running on a NUC that's been serving me well for the last couple of years.

I hadn't heard about mealie yet, but sounds like a great one to install.

> Ubuntu Server setup running on a NUC that's been serving me well

In my book, that's a homelab; it's just a small one (an efficient one?).

I have Proxmox running on top of a clean Debian install on my NUC. I wanted to let Plex use hardware decoding, and it got a bit funny trying to do that with Plex running in a VM, so Plex runs on the host and I use VMs for other stuff.

I have an Intel (12th Gen i5-12450H) mini PC and at first had issues getting the GPU firmware loaded and working in Debian 12. However, upgrading to Debian 13 (trixie) and doing an apt update and upgrade resolved the issue, and I was able to pass the onboard Intel GPU through Docker to a Jellyfin container just fine. I believe the issue is related to older Linux kernels and GPU firmware compatibility. Perhaps that's your issue.
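For reference, the Docker side of that passthrough is just handing the container the /dev/dri render devices. A minimal sketch with docker run (the paths and container name are examples, not the setup above):

    # expose the Intel iGPU's render nodes so Jellyfin can hardware-transcode
    docker run -d --name jellyfin \
      --device /dev/dri:/dev/dri \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media \
      -p 8096:8096 \
      jellyfin/jellyfin
    # depending on the distro, you may also need --group-add with the host's 'render' gid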

If you want to go down another, related rabbit hole, check out the DataHoarder subreddit. But don't blame me if you're buying terabytes of storage over the next few months :)

Data hoarding is a bit more involved than just a homelab. You don't want your data hoard to go down or go missing while you're labbing new tech and protocols.

don't blame me if you’re buying terabytes of USB drives and pulling out the hard drives

A Few Moments Later

There is time dilation in the homelab vortex ... what feels like a few hours can turn out to be years in the real world.

That's precisely what I meant! I'm at my sixth year, I guess. Maybe longer, I've lost count.

Can somebody explain the whole proxmox thing? I haven’t used it, I use k3s.

I don’t get why people use VMs for stuff when there’s docker.

Thanks!

Maybe my use case is abnormal, but I allocate the majority of my resources to a primary VM where I run everything, including containers, etc. By running Proxmox, though, I can back up my entire server and even transfer it across the network. If I ever have some software to try out, I can do it in a new VM rather than on my main host. I can also 'reboot' my 'server' without actually rebooting the real computer, which meant less fan noise and interruption back when I used an actual rack-mounted server at home.

Primarily, docker isn't isolation. Where isolation is important, VMs are just better.

Outside of that:

Docker & k8s are great for sharing resources, VMs allow you to explicitly not share resources.

VMs can be simpler to back up, restore, and migrate.

Some software only runs in VMs.

Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
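As a concrete example, on Proxmox that kind of passthrough is a couple of qm commands (the VM id and device addresses here are placeholders):

    # PCI passthrough (needs IOMMU enabled; pcie=1 assumes a q35 machine type)
    qm set 101 -hostpci0 0000:01:00.0,pcie=1
    # USB passthrough by vendor:product id
    qm set 101 -usb0 host=1a2b:3c4d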

For my setup, I have a handful of VMs and dozens of containers. I have a proxmox cluster with the VMs, and some of the VMs are Talos nodes, which is my K8s cluster, which has my containers. Separately I have a zimaboard with the pfsense & reverse proxy for my cluster, and another machine with pfsense as my home router.

My primary networking is done on dedicated boxes for isolation (not performance).

My VMs run: plex, home assistant, my backup orchestrator, and a few windows test hosts. This is because:

- The Windows test hosts don't containerise well; I'd rather containerise them.
- Plex has a dedicated network port and storage device, which is simpler to set up this way.
- Home Assistant uses a specific USB port & device, which is simpler to set up this way.
- I don't want Plex, Home Assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s. These are the services where small transient or temporary issues would impact the whole household.

Also note, I don't use the Proxmox container support (I use Talos) for two reasons: 1) I prefer k8s to manage services, and 2) the isolation boundary is better.

Proxmox is essentially a clustered hypervisor (a KVM wrapper, really). It has some overlap with K8s (via containers), but is simpler for what I do and has some very nice features, namely around backups, redundancy/HA, hardware passthrough, and the fact that it has a usable GUI.
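To give a flavour of the backup/HA side, each is about one command (the VM id and storage name are placeholders):

    # snapshot-mode backup of VM 101 to a configured backup storage
    vzdump 101 --storage backups --mode snapshot
    # let the cluster restart the VM on another node if this one dies
    ha-manager add vm:101 --state started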

I also use K8s at work, so this is a nice contrast to use something else for my home lab experiments. And tbh, I often find that if I want something done (or something breaks), muscle-memory-Linux-things come back to me a lot quicker than some obscure K8s incantation, but I suppose that's just my personal bias.

Several of my VMs (which are very different from containers, obviously - even though I believe VMs on K8s _can_ be a thing...) run (multiple) docker containers.

Not everything I put in a VM runs on Linux.

Personally: Proxmox/VMs are great if you'd like to separate physical HW. In my case, virtualized TrueNAS means I can give it a whole SATA controller and keep it as an isolated storage machine.

Whatever uses that storage usually runs in Docker inside an LXC container.
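(If you want to replicate the Docker-in-LXC part, the container needs nesting enabled, something like this, with the container id as a placeholder:)

    # allow Docker to run inside the LXC container
    pct set 210 -features nesting=1,keyctl=1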

If I need something more isolated (think public-facing Cloudflare), that's a separate Docker container on another network, routed through another OPNsense VM.

Desktop - a VM where I passed through a whole GPU and a USB hub.

Best part - it all runs on fairly low-power HW (<20W idle for the NAS, plus whatever the hard drives take - generally ~5W / HDD).

it's so you can have a machine to run docker on, basically.

especially useful if you want multiple of those, and also helpful if you don't want one of them anymore.

Makes backups of the KVM VM running docker easy too, right?

and you can move the whole vm to a different host approximately trivially
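With Proxmox that really is about one command (the VM id and target node are placeholders):

    # live-migrate VM 101 to node 'pve2' without shutting it down
    qm migrate 101 pve2 --online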

Not all things run as containers.

VMs can though =D

My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.

>No space left on device.

>In other words, you can lock yourself out of PBS. That’s… a design.

Run PBS in an LXC with the base on a ZFS dataset with dedup & compression turned off. If it bombs, you can increase the disk size in Proxmox & reboot it. Unlike VMs, you don't need to do anything inside the container to resize the FS, so this generally works as a fix.
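Roughly like this, with the dataset and container id as placeholders:

    # dataset for the PBS container root; dedup & compression off
    zfs create -o dedup=off -o compression=off rpool/pbs
    # if the datastore fills up, grow the rootfs from the host - nothing to do inside the CT
    pct resize 200 rootfs +16G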

>PiHole

AGH is worth considering because it has built-in DoH.
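e.g. pointing it at an encrypted upstream is just config (an AdGuardHome.yaml excerpt; the upstream here is an example):

    dns:
      upstream_dns:
        - https://dns.cloudflare.com/dns-query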

>Raspberry Pi 5, ARM64 Proxmox

Interesting. I'm leaning more towards k8s for integrating pis meaningfully

> My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.

DDR4 anything is becoming very expensive right now because manufacturers have been switching over to DDR5.

Yeah, built on AM4 and in hindsight spending more on mobo & CPU to hop on AM5 would have been the smart move. Live & learn.

On the plus side I have a lot of non-ECC DDR4 sticks that I'm dumping into the expensive market rn

>AGH is worth considering because it has built in DoH

Technitium has all the bells and whistles along with being cross platform.

https://technitium.com/dns/

I went in thinking that maybe there's something to learn for my grand total of one ThinkCentre M910q "homelab", but this author's setup is in another league, I'm sure closer to (or surpassing) the needs of a small/medium company!

Here is an actual "another league", if you are curious: https://youtu.be/-b3t37SIyBs

It’s another league, but I don’t get the point of mixing enterprise rack-mounts with Raspberry Pis.

You'd be delighted (or terrified) to know that I just added an old gaming computer in a 4U case to the cluster, so I can play with PCI/GPU passthrough.

The Dell is essentially the main machine that runs everything we actually use - the other hardware is either used as redundancy or for experiments (or both). I got the Pi from a work thing and this has been a fun use case. Not that I necessarily recommend it...

I second the shout out for Mealie, it's very useful. Importing from URLs works very well, and it gives you a centralised place for all your recipes, without ads or filler content and safe from linkrot.

One of my favorite CyberPower perks is their RMCARDs for network monitoring: It's a separate module that works in basically all of their rackmount UPSes. You can replace the entire UPS without having to pay for the little mini web server again, it'll just pop right into the new unit.

It's a neat card, but I couldn't bring myself to spend more than the price of the UPS itself ($439.00 for the RMCARD at time of writing). I ended up hooking my NAS to my existing home server via USB and NUT, and it's been working well.
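The NUT side is tiny, for anyone curious. A sketch assuming a USB HID UPS and default paths (the UPS name is arbitrary):

    # /etc/nut/ups.conf - the generic driver covers most USB UPSes
    [homeups]
        driver = usbhid-ups
        port = auto

    # then check it's talking:
    upsc homeups@localhost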

It is a little hefty for a homelab-level setup, but the impressive bit to me is that they've kept compatibility with it for longer than our UPSes have lasted at work (it looks like the RMCARD205 and 305 were introduced in 2018). So instead of paying for that hardware built into each unit, the RMCARD has been a one-time purchase we can bring from unit to unit.

Good reminder for me to set up a UPS for my home lab before I go on vacation. . .

Not sure I understand the need to use a Raspberry Pi here. They're cool and all, but wouldn't a regular old PC be simpler to set up, maintain, and attach hardware to? It's a hobby, and you can do whatever you want, but I wouldn't involve a Pi in a home server setup unless I specifically needed something it bought me, like the small form factor, low power usage, GPIO pins, and so on.

I always need lower power consumption. I'm in the UK and my power costs are $0.40 per kWh. Even running a Raspberry Pi 5 24/7 would cost me $25 per year.
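That $25 checks out if you assume a Pi 5 averaging about 7 W:

    7 W x 24 h x 365 days ≈ 61 kWh/year
    61 kWh x $0.40/kWh ≈ $25/year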

N100 minipcs will burn about as much power as a fully decked rpi5 and they're so much more hypervisor-friendly.

Look for used thin clients. You can get HP t630s for ~$50 or less. They have a nice AMD SoC. If mostly idle, they draw about 2x a Pi. Loaded, they are similar.

I just commented on this above, but I actually got the Pi for free and it's a very capable device. I wouldn't buy one for this use case (nor do I really recommend it, but it _does_ work).

I've recently learned that "homelab" is a specific thing meaning you run certain software (like Proxmox), and not a generic term for running a 'server lab' at home.

Most “homelabs” are built by a developer LARPing as a sysadmin, with a user population of one (themselves) or zero for most of the features.

It’s the SUV that has off-road tires but never leaves the pavement, the beginner guitarist with an arena-ready amp, the occasional cook with a $5k knife. No judgment, everyone should do what they want, but the discussions get very serious even though the stakes are low.

some people think it's not "homelabbing" unless you're doing things the way it's done at large scale. i think these people are aiming to enter IT as a career and consider a homelab to be a resume project.

but proxmox and kubernetes are overkill, imo, for most homelab setups. setting them up is a good learning experience but not necessarily an appropriate architecture for maintaining a few mini PCs in a closet long term.

you can ignore the gatekeeping.

Homelabbing is a hobby for most people involved in it, and like other hobbies, some people dip their toes in it while others go diving in the deep end. But would you say it’s “overkill” for a hobbyist fisher to have multiple fishing poles? Or for a hobbyist painter to try multiple sets of paintbrushes? Or a hobbyist programmer to know multiple programming languages?

There’s a lot of overlap between “I run a server to store my photos” and “I run a bunch of servers for fun”, which has resulted in annoying gatekeeping (or reverse gatekeeping) where people tell each other they are “doing it wrong”, but on Reddit at least it’s somewhat being self-organized into r/selfhosted and r/homelab, respectively.

> i think these people are aiming to enter IT as a career and consider a homelab to be a resume project.

It's funny. I did this (before it really became a more mainstream hobby, this was early 00s), but now that I work in ops I barely even want to touch a computer after work.

k8s is definitely overkill if your goal is not learning k8s.

proxmox is great, though. It's worth running it even if you treat it as nothing more than a BMC.

I'm running an Ubuntu server as a hypervisor only because the Proxmox installer uses an older kernel than the actual system and wouldn't install on my box :/

Where’d we get this term? I hear “home lab” and I think of having equipment to accomplish something new, not… running ordinary server software in fairly ordinary ways. Like Tony Stark designing his suits has a “home lab”. People 3D printing Warhammer figures or with a couple little servers running PiHole and Wireguard and such… not so much?

I’ve had one or two machines running serving stuff at home for a couple decades [edit: oh god, closer to 2.5 decades…], including serving public web sites for a while, and at no point would I have thought the term “home lab” was a good label for what I was doing.

Run whatever you like!

I personally enjoy the big machines (I've also always enjoyed meaninglessly large benchmark numbers on gaming hardware) and enterprise features, redundancy etc. (in other words, over-engineering).

I know others really enjoy playing with K8s, which is its own rabbit hole.

My main goal - apart from the truly useful core services - is to learn something new. Sometimes it's applicable to work (I am indeed an SWE larping as a sysadmin, as another commenter called out :-) ), sometimes it's not.

You can run whatever. You don't need specific software

Wait, what?

Surely people have had 'homelabs' for longer than VMs and containers have been mainstream?