ZFS uses a large amount of RAM; I think the old rule of thumb was 1GB of RAM per 1TB of storage.

That's only for deduplication.

https://superuser.com/a/993019

I do like to deduplicate my BitTorrent downloads/seeding directory with my media directories so I can edit metadata to my heart's content while still seeding forever without having to incur 2x storage usage. I tune the `recordsize` to 1MiB so it has vastly fewer blocks to keep track of compared to the default 128KiB, at the cost of any modification wasting very slightly more space. Really not a big deal though when talking about multi-gibibyte media containers, multi-megapixel art embeds, etc.
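Concretely, it's just two dataset properties; a sketch with hypothetical pool/dataset names, and note that both settings only apply to data written after they're set:

```sh
# 1MiB records => far fewer entries in the dedup table (DDT) to track
zfs set recordsize=1M tank/media
zfs set recordsize=1M tank/torrents

# Enable dedup only on the datasets that actually share data
zfs set dedup=on tank/media
zfs set dedup=on tank/torrents
```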

Have you considered "reflinks"? Supported as of [OpenZFS 2.2](https://github.com/openzfs/zfs/pull/13392).

Haven't used them yet myself, but it seems like a nice fit for things like minor metadata changes to media files. The bulk of the file is shared and only the delta between the two is saved.
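If I understand block cloning right, the copy itself is just a reflink-aware `cp`; hypothetical filenames:

```sh
# Needs the block_cloning pool feature (OpenZFS 2.2+); some releases
# also gate it behind the zfs_bclone_enabled module tunable.
cp --reflink=always original.mkv edited.mkv
# Edit tags on edited.mkv; only the rewritten blocks consume new space.
```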

Neat; will look into this. My setup is several years older than this, predating even FreeBSD's move to OpenZFS, and I just haven't touched its config since then because it works flawlessly (and because I already bought the RAM lol)

cross-seed: https://www.cross-seed.org/

I believe they are saying they literally edit the media files to add / change metadata. Cross-seeding is only possible if the files are kept the same.

ZFS also uses RAM for a read-through cache, aka the ARC. However, I'm not sure how noticeable the effect of extra RAM would be - I assume it mostly benefits read patterns with high data reuse, which aren't that common.

Huh. More than just the normal page cache on other filesystems?

Yes. Parent's comment matches everything I've heard. 32GB is a common recommendation for home lab setups. I run 32 in my TrueNAS builds (36TB and 60TB).
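If you're wondering whether more RAM would actually pay off, the ARC hit ratio is the number to watch; commands assume Linux with the OpenZFS tooling installed:

```sh
# Full report: ARC size, target, hit/miss ratios, etc.
arc_summary

# Raw counters straight from the kernel stats
grep -E '^(hits|misses)' /proc/spl/kstat/zfs/arcstats
```

A low hit ratio on a streaming-heavy box suggests extra RAM won't buy much.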

You can run it with much less. I don't recall the bare minimum, but with a bit of tweaking 2GB should be plenty [1].

I recall reading about someone running it on a 512MB system, but that was a while ago, so I'm not sure you can still go that low.

Performance can suffer, though; for example, low memory will limit the size of the transaction groups. So for decent performance you'll want 8GB or more, depending on the workload.
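On Linux the main knob is the `zfs_arc_max` module parameter; a sketch of capping the ARC at 2GiB (value in bytes):

```sh
# Runtime change: takes effect immediately, lost on reboot
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# Persistent: set it as a module option at load time
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
```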

[1]: https://openzfs.github.io/openzfs-docs/Project%20and%20Commu...

ZFS will eat up as much RAM as you give it, since it caches file data in memory (in the ARC) as it's accessed.

All filesystems do this (at least all modern ones on Linux).
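One practical difference: on Linux, ARC memory is counted as "used" rather than "buff/cache", so it looks scarier in `free` than ordinary page cache does:

```sh
free -h   # ARC shows up under "used", not "buff/cache"
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats   # current ARC size in bytes
```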