Very sad that HDDs, SSDs, and RAM are all increasing in price now, but I just made a 4 x 24 TB ZFS pool with Seagate Barracudas on sale at $10 / TB [1]. This seems like a pretty decent price, even though the Barracudas are only rated for 2,400 hours per year [2]; then again, that's the same spec the refurbished Exos drives are rated for.
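
In case anyone wants to replicate it, the pool creation itself is a one-liner. This is just a sketch with placeholder device IDs, and it assumes raidz1 across the four drives (adjust the layout and ashift to taste):

    # raidz1 over the four 24 TB Barracudas; the by-id paths are placeholders
    zpool create -o ashift=12 -O compression=lz4 -O atime=off tank raidz1 \
        /dev/disk/by-id/ata-ST24000DM001-AAAA \
        /dev/disk/by-id/ata-ST24000DM001-BBBB \
        /dev/disk/by-id/ata-ST24000DM001-CCCC \
        /dev/disk/by-id/ata-ST24000DM001-DDDD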

By the way, it's interesting to see that OP has no qualms about buying cheap Chinese motherboards, but splurged on an expensive Noctua fan when the Thermalright TL-B12 performs just as well for a lot less (although the Thermalright could be slightly louder and perhaps has a slightly more annoying noise spectrum).

Also, it is mildly sad that there aren't many cheap low-power (< 500 W) power supplies in the SFX form factor. The SilverStone Technology SX500-G 500 W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I've heard good things about getting Delta Flex 400 W PSUs from Chinese websites --- some companies (e.g. YTC) mod them to be fully modular, and they're supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them myself. On Taobao they're around $30.

[1] https://www.newegg.com/seagate-barracuda-st24000dm001-24tb-f...

[2] https://www.seagate.com/content/dam/seagate/en/content-fragm...

This is Black Friday pricing at least, if you're willing to shuck. Seagate drives are still sub-$10/TB, and a single 24-26TB drive is enough for all my photos (ever), my media, and some dataset backups for work. I'm planning to back up photos and other "glacier"-tier media like YouTube channels to Blu-ray (a disc or two per year). It's at the point where I'd rather just pay the money and forget about it for 5-10 years.

I built the case from MakerBeam and printed panels, with an old Corsair SF600 and a four-year-old ITX system using one of SilverStone's backplanes (they make ones that fit up to 5 drives in a 3x 5.25" bay form factor). It's a little overpowered (a 5950X), but I also use it as a generic home server and run a shared ZFS pool with 2x mirrored vdevs. Even with the space inefficiency of mirrors, it's more than I need. I put in a 1080 Ti for transcoding and odd jobs that need a little CUDA (like photo tagging); it runs ResNet50-class models easily enough. I've also wondered about treating it as a single-node SLURM server.
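
For anyone curious what "2x mirrored vdevs" looks like on the command line, it's just striped mirrors; a minimal sketch with made-up pool and device names:

    # two mirrors striped together: ~50% usable space, but fast resilvers
    zpool create -o ashift=12 tank \
        mirror /dev/disk/by-id/DRIVE-A /dev/disk/by-id/DRIVE-B \
        mirror /dev/disk/by-id/DRIVE-C /dev/disk/by-id/DRIVE-D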

>I just made a 4 x 24 TB ZFS pool

How much RAM did you install? Did you follow the 1GB per 1TB recommendation for ZFS? (i.e. 96GB of RAM)

That's only for ZFS deduplication, which you should never enable unless you have very, very specific use cases.

For normal use, 2GB of RAM for that setup would be fine. But more RAM is more readily available cache, so more is better. It is certainly not even close to a requirement.
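
If you want to see how much good the extra RAM is actually doing on a given box, OpenZFS ships a couple of tools for watching the ARC; nothing here is setup-specific:

    # live view of ARC size, hits and misses, refreshed every 5 seconds
    arcstat 5

    # one-shot summary of ARC/L2ARC sizing and tunables
    arc_summary | less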

There is a lot of old, often repeated ZFS lore which has a kernel of truth but misleads people into thinking it's a requirement.

ECC is better, but not required. More RAM is better, not a requirement. L2ARC is better, not required.

There are a couple of recent developments in ZFS dedup that help partially mitigate the memory issue: fast dedup, and the ability to use a special vdev to hold the dedup table if it spills out of RAM.
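
Roughly what that looks like in practice, assuming a pool named tank, a spare pair of NVMe devices, and a recent enough OpenZFS for the fast-dedup feature flag (it landed in 2.3, if I recall correctly):

    # dedicated allocation class for the dedup table; mirror it, because
    # losing this vdev means losing the pool
    zpool add tank dedup mirror /dev/nvme0n1 /dev/nvme1n1

    # dedup is still opt-in per dataset (tank/backups is a placeholder)
    zfs set dedup=on tank/backups

    # inspect the dedup table histogram afterwards
    zpool status -D tank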

But yes, there's almost no instance where home users should enable it. Even the traditional 5GB-per-1TB rule can fall over completely on systems with a lot of small files.
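
If anyone is on the fence, zdb can simulate dedup against an existing pool and show roughly what table size and dedup ratio you'd actually get, before you commit to anything:

    # read-only dry run: prints a simulated DDT histogram and dedup ratio
    # (can take a long time on a big pool)
    zdb -S tank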

I think the asterisk there is that the special vdev requires redundancy and becomes a mandatory part of your pool.

Some ZFS discussions suggest that an L2ARC vdev can cache the DDT. Do you know if this is correct?

Yes, that's the reason why a dedup vdev that has lower redundancy than your main pool will fail with "mismatched replication level" unless you use the -f (force) flag.

I'm not sure whether an L2ARC vdev can offload the DDT, but my guess is no, given the built-in logic warning against mismatched replication levels.

Well, the warning makes sense with respect to the dedup vdev since the DDT would actually be stored there. On the other hand, the L2ARC would simply serve as a read cache, similar to the DDT residing in RAM.
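
For what it's worth, if the L2ARC does end up holding DDT blocks (they're metadata from the ARC's point of view), you can at least bias a cache device toward metadata. Sketch with a made-up device name:

    # L2ARC is expendable, so a single unmirrored device is fine
    zpool add tank cache /dev/nvme2n1

    # only let metadata spill into the L2ARC, so data blocks don't crowd it out
    zfs set secondarycache=metadata tank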

Nice. I was hoping a vdev for the dedup table would come along. I've wanted to use Optane for the dedup table and see how it performs.

I think you should be fine with 64GB (4x16GB ECC); I have 8x10TB in RAID-Z2 and it uses around 34GB.

I believe the default is that ZFS uses up to 50% of your RAM for caching.
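
On Linux that cap is the zfs_arc_max module parameter, and it's easy to check or pin down if the box has other jobs; the 16 GiB below is just an example value:

    # current cap in bytes (0 means "use the built-in default")
    cat /sys/module/zfs/parameters/zfs_arc_max

    # cap the ARC at 16 GiB for this boot
    echo $((16 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max

    # persist it across reboots (appends; check for an existing line first)
    echo "options zfs zfs_arc_max=$((16 * 1024 * 1024 * 1024))" | sudo tee -a /etc/modprobe.d/zfs.conf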

Some myths never die, I guess...

That was never a myth, was it? It was just sound advice that was repeated without the information about which specific use cases it applied to.

Are you running this in RAID-Z2?

I'm way too bothered by how long it would take to resilver disks that size.
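
When a resilver does happen, at least you're not flying blind; zpool reports progress and an estimated time remaining (tank is a placeholder name):

    # progress line with percent done and ETA, refreshed every 30 seconds
    watch -n 30 zpool status -v tank

    # per-vdev throughput while it churns
    zpool iostat -v tank 5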

> $10 / TB

That's a remarkably good price. If I had $1.5k handy I'd be sorely tempted (even tho it's Seagate).

It's a good price but the Barracuda line isn't intended for NAS use so it's unclear how reliable they are. But it's still tempting to roll the dice given how expensive drive prices are right now.

I was tempted by 4 x 28TB (Recertified Seagate ST28000NM000C) but could not work out what I would use it for.

I've recently shucked some Seagate HAMR 26TB drives; hopefully they last.

I won that lottery with 3x 26TB Exos shipped. I decided to try to get two more, but they ended up being HAMR (returned). Then I managed to find two more with earlier manufacturing dates in store stock at a somewhat-far Best Buy that I was driving past anyway.

It felt like an unnecessary purchase at the time (I'm still waiting to CAD a CPU cooler mounting solution for the build in a new case that has room for the drives). But it seems like that deal is going to be the high water mark for a few years, at least.

Not surprised by the fan; once I went Noctua I didn't go back.