I think you two may be talking past each other a bit here. Bear in mind I am not a security expert, just a spirited hobbyist; I may be missing something. As stated in my digital resilience audit, I actually use both Tarsnap and restic for different use cases. That said:
Tarsnap handles its deduplication client-side, against a local cache, rather than needing to read or delete anything on the server. So you can set up a write-only Tarsnap key and trust the deduplication to keep working. A compromised machine holding a write-only Tarsnap key can't delete existing archives; the worst it can do is keep writing new ones to try to bleed your account dry (which, ironically, the low sync rate helps protect against - not a defense of it, just a funny coincidence).
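For the curious, that kind of write-only key is derived from the master key with tarsnap-keymgmt; the paths below are just examples:

# derive a key that can write new archives but not read or delete anything
# (paths are placeholders; keep the master key somewhere offline)
tarsnap-keymgmt --outkeyfile /root/tarsnap-write.key -w /root/tarsnap-master.key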
restic, by contrast, deduplicates at the chunk level, and, more relevant here, it manages its own lock files inside the repository. When a backup starts, I can watch restic create a lock and upload it to my S3-compatible backend (my general-purpose backups actually use Backblaze B2, not AWS S3 proper, caveat emptor). When the backup finishes, restic deletes that lock, and that deletion has to be synced to the backend too. So a restic key needs both write access and at least some delete access to the S3 backend, which is not ideal for ransomware protection.
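You can watch this lock lifecycle yourself with restic's own tooling. A rough sketch, assuming an S3-compatible repository - the endpoint, bucket, and password file below are placeholders:

# repository URL and password file are made-up examples
export RESTIC_REPOSITORY=s3:s3.us-west-002.backblazeb2.com/my-backup-bucket
export RESTIC_PASSWORD_FILE=~/.restic-password
restic list locks   # while a backup is running, this lists the live lock ID
restic unlock       # removes stale locks - itself a delete against the backend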
Many S3-compatible backends, including B2, offer some kind of bucket-level object lock that prevents modification or deletion of objects in that bucket for, say, their first 30 days. But that doesn't rescue a restic setup either, because restic's own synced lock files get that 30-day protection too and can no longer be cleaned up.
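For reference, this is roughly what that kind of bucket-wide default retention looks like when configured on S3 proper (the 30 days and COMPLIANCE mode are just example values; B2 exposes a similar per-bucket setting through its own API):

{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 30
    }
  }
}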
I can see why one would think you can't get around this without restic itself having something to say about it. Gemini tells me that S3 proper does let you set delete permissions at a granular enough level that you can tell it to only allow delete on locks/, with something like
# possible hallucination.
# someone good at s3 please verify
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDeleteLocksOnly",
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::backup-bucket/locks/*"
    }
  ]
}
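If that statement is real, the same key would presumably also need the usual list/read/write permissions for the rest of the repository, since restic has to read its config and index and upload new pack files even in a write-mostly setup. I'd expect the extra statements alongside it to look something like this (bucket name is again a placeholder):

{
  "Sid": "AllowRepoReadWrite",
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject"],
  "Resource": "arn:aws:s3:::backup-bucket/*"
},
{
  "Sid": "AllowRepoList",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::backup-bucket"
}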
But I have not tested this myself, and it isn't necessarily true across S3-compatible providers. I don't know how to get that level of granularity out of Backblaze, for example, which is unfortunate because B2 is about a quarter the cost of S3 for hot storage.

The cleanest solution would probably be some way for restic to handle locks locally, so that locks never need to hit the S3 backend in the first place. I imagine restic's developers are already aware of that, so it's likely a much harder problem to solve than it first appears. Another option may be a dedicated, restic-aware provider like BorgBase. It sounds like they handle their own disks, so they probably already have some kind of workaround in place for this. Of course, as others have mentioned, you may not get as many nines out of BB as you would out of one of the more established general-purpose providers.
P.S.: Thank you both immensely for this debate; it's helped me advance my own understanding a little further.