> Had to install a daemon to power down the hard disks when not in use

It depends on the drives you buy, of course, but unnecessary spin-up/spin-down cycles will wear out drives faster. Many NAS drives will keep running intentionally for this reason. If the drives didn't spin down by themselves, it's possible they weren't designed to start/stop that often.
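If you want to verify what your drives are actually doing, hdparm can report the power state; a minimal sketch in Python (the device path is an assumption on my part, and it needs root):

```python
#!/usr/bin/env python3
"""Report a drive's power state (active/idle vs. standby)."""
import subprocess

# "hdparm -C" prints e.g. "drive state is: active/idle",
# or "standby" once the disk has spun down.
subprocess.run(["hdparm", "-C", "/dev/sda"], check=True)
```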

My personal NAS is Debian (for stability and unattended upgrades) running on a motherboard in low-power mode, with ZFS plus the usual software for accessing the NAS. I could write a script that reboots the system whenever /var/run/reboot-required appears, but a monthly scheduled reboot works fine too.
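For illustration, that script could be as small as this; a minimal sketch, assuming it runs as root from a cron job or systemd timer (the schedule and the systemctl call are my choices, not a prescribed setup):

```python
#!/usr/bin/env python3
"""Reboot when Debian flags that a restart is needed."""
import os
import subprocess

# unattended-upgrades touches this file when an update
# (typically a new kernel or libc) requires a restart.
FLAG = "/var/run/reboot-required"

if os.path.exists(FLAG):
    # Hand off to systemd for a clean shutdown and reboot.
    subprocess.run(["systemctl", "reboot"], check=True)
```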

In my experience, this setup rarely requires any maintenance. It's quite boring for a homelab project. Once every few years I need to upgrade to a newer version of Debian (last time I went from 10 to 11 to 12 in one go) but even that isn't much of a spectacle if you don't mess with non-Debian package repositories.

I have basically the same setup, but with Arch (I don't trust distributions that patch upstream heavily) and btrfs (it copes with my disks of mismatched sizes).

I used to change the CPU frequency governor on my previous NAS, but the default Linux setup (schedutil) is now perfectly adequate. Same with disk power management: the defaults are fine.
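If you want to confirm which governor your kernel picked, the standard sysfs interface is enough; a quick sketch (standard Linux paths, nothing NAS-specific):

```python
#!/usr/bin/env python3
"""Print the active cpufreq governor for each core."""
from pathlib import Path

for gov in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_governor")):
    # gov.parent is .../cpuN/cpufreq, so parent.parent names the core.
    print(gov.parent.parent.name, gov.read_text().strip())
```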

The whole thing just chugs along happily with basically no effort on my side, apart from the occasional upgrade requiring manual intervention.

It certainly didn't require anything I would consider nonsense, and I have seen plenty of things I would call that in the past (looking at you, OpenLDAP and PAM). Sure, you need to have a vague idea of what RAID is, but building a NAS while expecting to need zero knowledge of storage seems extremely ambitious to me. That said, I realise that what I take for granted as basic knowledge might not seem so basic from someone else's point of view.