Is it correct that you have zero data redundancy? This may work for you if you're just hoarding videos from YouTube, but not for most people who require an assurance that their data is safe. Even for you, it may hurt proper benchmarking, reproducibility, and multi-iteration training if the parent source disappears.

Definitely much less redundancy, yes; that was a tradeoff we made in favor of pretraining data volume and cost.

Did you do any kind of redundancy at least (e.g. putting every 10 disks in RAID 5 or RAID-Z1)? Or I suppose your training application doesn't mind if you shed a few terabytes of data every so often?
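
For reference, a 10-disk RAID-Z1 group is roughly a one-liner with ZFS (pool and device names below are just placeholders, not anything from your setup):

```
# one parity disk per 10-disk group, so roughly 10% capacity overhead
sudo zpool create trainpool raidz1 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
```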

atm we don't, and we're a bit unsure whether it's a free lunch wrt the added complexity. there's a really nice property of having isolated hard drives: you can take any individual one, `sudo mount` it, and you have a nice chunk of training data. that's something anyone can feel comfortable touching without any onboarding to some software stack.
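
for concreteness, the whole workflow is basically just this (device path, filesystem, and mount point below are made up for illustration):

```
# pull any single drive off the shelf and it's immediately usable on its own
sudo mkdir -p /mnt/shard
sudo mount /dev/sdb1 /mnt/shard
ls /mnt/shard        # plain files: a self-contained chunk of training data
```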

I wonder if SnapRAID would work for this. Especially since your data is mostly written once and then just read, it could be an easy way to add redundancy while keeping the individual drives isolated.
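
Something like this, roughly (paths and disk names are illustrative); each data drive stays an ordinary, independently mountable filesystem, and parity lives on a separate dedicated disk:

```
# /etc/snapraid.conf (sketch)
parity  /mnt/parity/snapraid.parity      # dedicated parity drive
content /var/snapraid/snapraid.content   # array metadata; keep copies on several disks
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/                      # each data disk remains a plain filesystem
data d2 /mnt/disk2/
```

Then `snapraid sync` recomputes parity after new data lands (which fits a write-once workload well), `snapraid scrub` periodically checks for silent corruption, and `snapraid fix` rebuilds a lost disk's contents from parity plus the surviving disks.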