It found problems in the tree - lost files, wrong node counts, other stuff - which led to me finding files that didn't match previous backups (and when opened were obviously corrupted, like the bottom half of an image being just noise). Once I knew this was a problem, I also caught files that couldn't be read at all (IOError) and that fsck would delete on the next run.
I may not have noticed had fsck not alerted me something was wrong.
But metadata is data too, right? I guess the next question is: could parts of the FS metadata remain untouched long enough for the SSD data corruption process to occur?
A ZFS scrub (default scheduled monthly) will do it.
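If you don't want to wait for the scheduled run (the schedule itself is distro-dependent; on Debian/Ubuntu it comes from a cron job shipped with zfsutils-linux, as far as I know), you can kick one off and check on it by hand - "tank" here is just a placeholder pool name:

    zpool scrub tank        # read every allocated block, data and metadata, and verify checksums
    zpool status -v tank    # shows scrub progress and lists any files with unrecoverable errors

With redundancy (mirror/raidz) the scrub also repairs anything it can from the good copy, so it catches the slow bit-rot case the parent is worried about instead of just reporting it.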