I disagree, strongly. Here are the relevant docs:
https://restic.readthedocs.io/en/stable/030_preparing_a_new_...
I would like to see an explicit discussion of what permissions are needed for what operation. I would also like to see a clearly specified model in which backups can be created in a bucket with less than full permissions and, even after active attack by an agent with those same permissions, one can enumerate all valid backups in the bucket and be guaranteed to be able to correctly restore any backup as long as one can figure out which backup one wants to restore.
Instead there are random guides on medium.com describing a configuration that may or may not have the desired effect.
Again, this isn’t at all in the scope of restic’s docs. If you’re using S3 as the storage, it’s on you to understand how S3 works and what permissions are needed, just like it’s on you to understand how your local filesystem and its permissions work if you use the local filesystem as a backend.
If you don’t understand S3 or don’t want to learn, that’s fine, and you can pay the premium to Tarsnap for simplifying it for you. But that’s your choice, not an issue with restic.
If you think differently, have you submitted a PR to restic’s docs to add the information you think should be there?
Interesting play on the debate, but after the response to restic's original decision to upstream Object Store permissions and features... to the Object Store, along with my attempts to explain S3 to several otherwise reasonably technical people...
I think people are frequently trapped in some way of thinking (I'm not sure exactly what) that doesn't allow them to think of storage as anything other than block-based. They repeatedly try to reduce S3 to LBAs, or to POSIX permissions (not even modern ACL-style permissions), or to some other comparison that falls apart quickly.
The best I've come up with is "an object is a burned CD-R." Even that falls apart, though.
I still completely disagree. It’s on me to understand IAM. It should not be on me to reverse-engineer the way restic uses S3 just to determine whether I can credibly restore from an S3 bucket after a compromised client gets permission to create objects that didn’t previously exist. Or to create new corrupt versions of existing objects.
For that matter, suppose an attacker modifies an object and replaces it with corrupt or malicious contents, and I detect it, and the previous version still exists. Can the restic client, as written, actually manage the process of restoring it? I do not want to need to patch the client as part of my recovery plan.
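For illustration: if bucket versioning is on, the manual fallback appears to be something like this (an untested sketch; the bucket name, object key, and version ID are placeholders, and the data/ path is my assumption about restic's repository layout):

    # Untested sketch, assuming bucket versioning is enabled on the repo bucket.
    # 1. Find the surviving known-good version of the tampered object:
    aws s3api list-object-versions --bucket my-restic-bucket \
      --prefix data/ab/abcdef
    # 2. Copy that version back on top of the key so it becomes the latest again:
    aws s3api copy-object --bucket my-restic-bucket \
      --key data/ab/abcdef \
      --copy-source "my-restic-bucket/data/ab/abcdef?versionId=GOOD_VERSION_ID"

Doable, but that is exactly the kind of procedure I don't want to be improvising during an incident.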
(Compare to Tarsnap. By all accounts, if you back up, your data is there. But there are more than enough reports of people who are unable to usefully recover the data because the client is unbelievably slow. The restore tool needs to do what the user needs it to do in order for the backup to be genuinely useful.)
I think you two may be talking past each other a bit here. Bear in mind I am not a security expert, just a spirited hobbyist; I may be missing something. As stated in my digital resilience audit, I actually use both Tarsnap and restic for different use cases. That said:
Tarsnap's deduplication works at the archive level, not on the particular files etc. within the archive. Someone can set up a write-only Tarsnap key and trust the deduplication to work. A compromised machine with a write-only Tarsnap key can't delete Tarsnap archive blobs; it can only keep writing new archive blobs to try to bleed your account dry (which, ironically, the low sync rate helps protect against - not a defense of it, just a funny coincidence).
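For anyone following along, cutting such a key from the master key is a one-liner with tarsnap-keymgmt; a sketch, with placeholder paths:

    # Derive a write-only key from the master key: the host can create
    # archives but not read or delete them. Key file paths are placeholders.
    tarsnap-keymgmt --outkeyfile /root/tarsnap-write.key -w /root/tarsnap.key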
restic by contrast does its dedupe at the chunk level, within and across files, and what's more it manages its own locks as objects inside the repository itself. Upon starting a backup, I observe restic first creates a lock and uploads it to my S3-compatible backend (my general-purpose backups actually use Backblaze B2, not AWS S3 proper, caveat emptor). When the backup finishes, restic deletes that lock and syncs that change to my S3 backend too. That means a restic key needs both write access and some kind of delete access to the S3 backend, at a minimum, which is not ideal for ransomware protection.
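Concretely, that observation is just watching the bucket from outside while a backup runs (the bucket name and endpoint below are placeholders standing in for my B2 setup):

    # Watch the locks/ prefix during "restic backup"; bucket and endpoint
    # are placeholders (B2 exposes an S3-compatible endpoint):
    aws s3 ls s3://my-restic-bucket/locks/ \
      --endpoint-url https://s3.us-west-004.backblazeb2.com
    # A lock object appears when the backup starts and disappears when it
    # finishes, hence the need for delete rights on at least locks/.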
Many S3 backends, including B2, have some kind of bucket-level object lock that prevents the modification/deletion of objects within that bucket for, say, their first 30 days. But this doesn't save us from ransomware either, because restic's own synced lock gets that 30-day protection too.
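For reference, on S3 proper that kind of blanket retention is configured something like this (an untested sketch; the bucket name is a placeholder, and Object Lock can only be enabled when the bucket is created):

    # Untested sketch: apply a 30-day default compliance retention rule.
    aws s3api put-object-lock-configuration --bucket my-restic-bucket \
      --object-lock-configuration '{
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}
      }'

A default rule like that applies to every new object in the bucket, restic's own lock files under locks/ included.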
I can see why one would think you can't get around this without restic itself having something to say about it. Gemini tells me that S3 proper does let you set delete permissions at a granular enough level that you can allow delete only on locks/, with something like this (the bucket and user names below are placeholders):
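    # Untested, LLM-suggested, so verify before trusting it. Allows reads
    # and writes everywhere, but deletes only under the locks/ prefix.
    aws iam put-user-policy --user-name restic-client \
      --policy-name restic-delete-locks-only \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {"Sid": "ListRepo", "Effect": "Allow",
           "Action": "s3:ListBucket",
           "Resource": "arn:aws:s3:::my-restic-bucket"},
          {"Sid": "ReadWriteObjects", "Effect": "Allow",
           "Action": ["s3:GetObject", "s3:PutObject"],
           "Resource": "arn:aws:s3:::my-restic-bucket/*"},
          {"Sid": "DeleteLocksOnly", "Effect": "Allow",
           "Action": "s3:DeleteObject",
           "Resource": "arn:aws:s3:::my-restic-bucket/locks/*"}
        ]
      }'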
But I have not tested this myself, and it isn't necessarily true across S3-compatible providers. I don't know how to get this level of granularity in Backblaze, for example, which is unfortunate because B2 is about a quarter the cost of S3 for hot storage.

The cleanest solution would probably be some way for restic to handle locks locally, so that locks never need to hit the S3 backend in the first place. I imagine restic's developers are already aware of that option, so the fact that it doesn't exist suggests the problem is much harder to solve than it first appears. Another option may be to use a dedicated, restic-aware provider like BorgBase. It sounds like they handle their own disks, so they probably already have some kind of workaround in place for this. Of course, as others have mentioned, you may not get as many nines out of BorgBase as you would out of one of the more established general-purpose providers.
P.S.: Thank you both immensely for this debate, it's helped me advance the state of my own understanding a little further.