Wouldn't creating an archive on the filesystem and then deleting the archive cause more writes than just creating it without a delete?
Not in the big picture.
Fundamentally, flash memory is a bunch of pages. Each page can be read essentially without limit, but it can only be written (erased and reprogrammed) a finite number of times before it wears out, and that limit matters in practice.
In a simplistic system, let's say you have 1000 pages: 999 hold static data and the last one keeps receiving a temporary file that is then erased. All the wear falls on page 1000, and it doesn't last very long.
In a better system, the controller notices that page 1000 is accumulating a lot of writes, picks whichever page has the fewest writes, copies that page's data over to page 1000, and uses the freed page for the hot writes instead. Repeat until everything is evenly worn. This is wear leveling, and note the extra write incurred by copying the page over.
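The difference is easy to see in a toy simulation. This sketch uses made-up numbers (1000 pages, a hypothetical 3000-write endurance per page) and a deliberately crude leveling policy: every 100 writes, migrate the hot slot to the least-worn page, paying one extra write for the copy.

```python
# Toy model of flash wear. All numbers are illustrative, not from any real drive.
PAGES = 1000
ENDURANCE = 3000          # writes a page survives (hypothetical)
TOTAL_WRITES = 1_000_000  # temp-file writes to simulate

def naive():
    """No wear leveling: every write hits page 1000. Returns writes completed
    before that single page wears out."""
    return min(TOTAL_WRITES, ENDURANCE)

def leveled():
    """Crude wear leveling: after every 100 writes, copy the hot slot's data
    to the least-worn page (one extra write) and continue there. Returns
    writes completed before any page wears out."""
    wear = [0] * PAGES
    hot = PAGES - 1
    done = 0
    for _ in range(TOTAL_WRITES):
        wear[hot] += 1
        done += 1
        if wear[hot] >= ENDURANCE:
            return done
        if wear[hot] % 100 == 0:
            new = wear.index(min(wear))  # least-worn page takes over
            wear[new] += 1               # cost of copying the data over
            hot = new
    return done

print(naive(), leveled())
```

Without leveling the drive dies after a few thousand writes to the hot page; with leveling the same workload is spread across all 1000 pages and completes, despite the extra copy writes.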
In the real world, a drive with more free space is less likely to have to resort to copying pages.
Wear increases when free space is low because there's less room to spread the data around. If you only have 500MB free, those blocks absorb the majority of the write hammering until the chip fails; with 5000MB free, the same writes can be spread ten times thinner.
I think the goal is to save as much as you can in the interim. Holding onto X bytes of archives is more time's worth of data than X bytes uncompressed. We do that sort of thing all the time in finance: data gets spewed off to external places, local copies are kept but archived, and we simply rotate the oldest stuff out as we go. If the cleanup process is configured separately from the archiving process, you can absolutely archive something only to remove it shortly thereafter.