That's only for ZFS deduplication, which you should never enable unless you have very, very specific use cases.
For normal use, 2GB of RAM would be fine for that setup. More RAM just means more readily available cache, so more is better, but it's nowhere near a requirement.
There is a lot of old, often-repeated ZFS lore that has a kernel of truth but misleads people into treating nice-to-haves as requirements.
ECC is better, but not required. More RAM is better, not a requirement. L2ARC is better, not required.
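If you want to see how much of your RAM is actually working as cache, the ARC stats are easy to check, and the ceiling is tunable. A quick sketch for OpenZFS on Linux (the 8 GiB cap is just an example value, not a recommendation):

    # Summarize current ARC usage (arc_summary ships with OpenZFS)
    arc_summary | head -n 40

    # Raw counters if arc_summary isn't installed
    grep -E '^(size|c_max|hits|misses) ' /proc/spl/kstat/zfs/arcstats

    # Cap the ARC at 8 GiB until next reboot
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max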
There are a couple of recent developments in ZFS dedup that partially mitigate the memory issue: fast dedup, and the ability to use a special vdev to hold the dedup table if it spills out of RAM.
But yes, there's almost no case where home users should enable it. Even the traditional 5GB-per-1TB rule can fall over completely on systems with a lot of small files, since the dedup table grows with the number of unique blocks rather than raw capacity.
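For anyone curious what that looks like in practice, you can estimate the DDT before ever enabling dedup, and park the table on its own mirror if you do go ahead (pool and device names below are made up):

    # Simulate dedup on an existing pool and print the would-be DDT histogram
    zdb -S tank

    # Add a mirrored dedup allocation-class vdev to hold the dedup table
    zpool add tank dedup mirror /dev/nvme0n1 /dev/nvme1n1

    # Dedup is then enabled per dataset, not per pool
    zfs set dedup=on tank/backups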
I think the asterisk there is that the special vdev requires its own redundancy and becomes a mandatory part of your pool: lose it and you lose the pool.
Some ZFS discussions suggest that an L2ARC vdev can cache the DDT. Do you know if this is correct?
Yes, that's why a dedup vdev with lower redundancy than your main pool will be rejected with a "mismatched replication level" error unless you use the -f (force) flag.
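Roughly what that looks like, with hypothetical devices (I'm paraphrasing the error text from memory):

    # Main pool is a mirror, but the proposed dedup vdev is a single disk
    zpool add tank dedup /dev/nvme0n1
    # -> refused with a "mismatched replication level" error

    # Force it anyway, accepting lower redundancy for the DDT
    zpool add -f tank dedup /dev/nvme0n1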
I'm not sure whether an L2ARC vdev can offload the DDT, but my guess is no, given the built-in logic that warns against mismatched replication levels.
Well, the warning makes sense for the dedup vdev, since the DDT would actually be stored there. An L2ARC device, on the other hand, would only serve as a read cache, much like the DDT being cached in RAM.
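If someone wanted to experiment, the relevant knobs would be a cache device plus the secondarycache property, which controls what's eligible for L2ARC; DDT blocks are metadata, so in principle they'd qualify (device names are placeholders):

    # Add an L2ARC device to the pool
    zpool add tank cache /dev/nvme2n1

    # Restrict L2ARC to metadata only (the default is "all")
    zfs set secondarycache=metadata tank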
Nice. I was hoping a vdev for the dedup table would come along. I've wanted to try Optane for the dedup table and see how it performs.
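If you do try it, the DDT stats are visible without any extra setup, so it should be easy to watch how the table grows (pool name hypothetical):

    # Print the dedup table histogram and in-core/on-disk sizes
    zpool status -D tank

    # More detail on the DDT objects
    zdb -DD tank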