There are a couple of recent developments in ZFS dedup that help partially mitigate the memory issue: fast dedup, and the ability to use a special vdev to hold the dedup table when it doesn't fit in RAM.
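
For anyone who wants to poke at the fast dedup side, it looks roughly like this on OpenZFS 2.3+. This is a sketch from memory, so treat the property and command names as assumptions and check the zpoolprops and zpool-ddtprune man pages; pool/dataset names here are made up.

    # enable the new feature flag on an existing pool (a one-way upgrade)
    zpool set feature@fast_dedup=enabled tank
    # cap how large the dedup table may grow
    zpool set dedup_table_quota=10G tank
    # prune entries for blocks that have stayed unique for 30+ days
    zpool ddtprune -d 30 tank
    # dedup itself is still switched on per dataset
    zfs set dedup=on tank/backups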

But yes, there's almost no case where home users should enable it. Even the traditional rule of thumb of ~5 GB of RAM per 1 TB of deduped data can fall over completely on systems with a lot of small files, because the dedup table scales with the number of blocks rather than raw capacity.
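
Quick back-of-envelope on why, assuming the commonly quoted ~320 bytes per DDT entry (the real per-entry size varies by pool and version):

    # 1 TiB of unique data at 128 KiB recordsize:
    #   2^40 / 2^17 ~= 8.4M blocks   ->  8.4M * 320 B  ~= 2.7 GB of DDT
    # 1 TiB of unique data at 4 KiB records (VM images, databases, tiny files):
    #   2^40 / 2^12 ~= 268M blocks   ->  268M * 320 B  ~= 86 GB of DDT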

Nice. I was hoping a vdev for the dedup table would come along. I've wanted to try Optane for the dedup table and see how it performs.

I think the asterisk there is that the special vdev requires redundancy and becomes a mandatory part of your pool: lose it and you lose the whole pool.
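
For reference, adding one with redundancy matching the pool looks something like this (pool and device names are placeholders):

    # add a mirrored dedup vdev, e.g. a pair of Optane/NVMe devices
    zpool add tank dedup mirror nvme0n1 nvme1n1
    # it then shows up as its own vdev class in the pool layout
    zpool status tank
    zpool list -v tank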

Some ZFS discussions suggest that an L2ARC vdev can cache the DDT. Do you know if this is correct?

Yes, that's the reason why trying to add a dedup vdev with lower redundancy than your main pool will fail with a "mismatched replication level" error unless you use the -f (force) flag.
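
For example, on a raidz pool the single-device add gets refused with something along these lines (exact wording varies by version; names are placeholders):

    zpool add tank dedup nvme0n1
    #   invalid vdev specification
    #   use '-f' to override the following errors:
    #   mismatched replication level: pool uses raidz and new vdev is disk

    # forcing it through works, but then a single unmirrored device
    # holds pool-critical metadata
    zpool add -f tank dedup nvme0n1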

I'm not sure whether an L2ARC vdev can offload the DDT, but my guess is no, given the built-in logic warning against mismatched replication levels.

Well, the warning makes sense for the dedup vdev, since the DDT would actually be stored there. The L2ARC, on the other hand, would only serve as a read cache, much like the DDT being cached in RAM: losing it wouldn't cost you any data.
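
If someone wants to test it, the plumbing is cheap to set up: add a cache device and watch whether it warms up under dedup-heavy reads. The commands below are standard (placeholder names), but whether DDT blocks actually end up in L2ARC is exactly the open question here.

    # add an L2ARC device (not redundant; losing it loses no data)
    zpool add tank cache nvme2n1
    # only cache metadata (not file data) from this dataset in L2ARC
    zfs set secondarycache=metadata tank
    # watch cache fill and hit activity
    zpool iostat -v tank 5
    arcstat 5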