24 TB is available on a single drive now, though.

I haven't kept up with spinning rust drives so I had to take a look. Seagate has a couple of 30 TB models now, crazy. Lots of eggs in one basket... and through the same ol' 6 Gbit SATA interface these must be a nightmare to format or otherwise deal with. Impressive nonetheless.

The new drives have dual actuators to improve performance:

https://www.seagate.com/au/en/innovation/multi-actuator-hard...

https://www.youtube.com/watch?v=5eUyerocA_g

I'm pretty sure that there were such drives more than twenty years ago (not popular though). I have to ask, what's the point today? The average latency goes down at most linearly with the number of actuators. One would need thousands to match SSDs. For anything but pure streaming (archiving), spinning rust seems questionable.

Edit: found it (or at least one) "MACH.2 is the world’s first multi-actuator hard drive technology, containing two independent actuators that transfer data concurrently."

World's first my ass. Seagate should know better, since it was them who acquired Connor Peripherals some thirty years ago. Connor's "Chinook" drives had two independent arms. https://en.wikipedia.org/wiki/Conner_Peripherals#/media/File...

Those HDDs, if single-actuator, give up around 2-4 MB of streaming potential per seek.

That means, if you access files of exactly that size, you'd "only" halve your IOPS.

HDDs are quite fine for data chunks in the megabytes.
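A quick back-of-the-envelope check of that figure (the ~8 ms average seek and ~250 MB/s sustained rate below are assumed round numbers, not taken from any spec sheet):

```shell
# streaming capacity lost per seek = sustained throughput x seek time
awk 'BEGIN { seek_ms = 8; mb_per_s = 250;
             printf "%.0f MB of streaming lost per seek\n", mb_per_s * seek_ms / 1000 }'
```

So a drive doing nothing but ~2 MB reads at random offsets spends roughly half its time seeking, which is where the "halve your IOPS" estimate comes from.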

>HDDs are quite fine for data chunks in the megabytes.

Exactly. SSD fanboys, show me a similarly priced 30 TB SSD and we can discuss. A bit like internal combustion vs. e-cars: the new tech is in principle simpler and cheaper, in practice simpler but pricier, with the promise of "one day". But I suppose LCDs were once in a similar place, so it may be a matter of time.

Btw, you can get refurbished ones relatively cheap too, ~$350[0]. I wouldn't put that in an enterprise backup server, but it's a pretty good deal for home storage if you're running RAID and backups.

[0] https://www.ebay.com/itm/306235160058

Prices have soared recently because AI eats storage as well as GPUs, but tracking the data-hoarder sites can be worthwhile. Seagate sometimes has decent prices on new.

> Seagate sometimes has decent prices on new.

Make sure to check the "annual powered-on hours" entry in the spec sheet though, sometimes it can be significantly less than ~8766 hours.

Probably a good time to mention systemd automount. It mounts and unmounts drives as needed: you save on your energy bill, but the trade-off is that the first read after an idle period takes longer, since the drive has to spin up and mount.

You need two files, the mount file and the automount file. Keep these (or something similar) as skeleton files somewhere and copy them over as needed:

  # /etc/systemd/system/full-path-drive-name.mount
  [Unit]
  Description=Some description of drive to mount
  Documentation=man:systemd-mount(1) man:systemd.mount(5)

  [Mount]
  # Find with `lsblk -f`
  What=/dev/disk/by-uuid/1abc234d-5efg-hi6j-k7lm-no8p9qrs0ruv
  # See file naming scheme
  Where=/full/path/drive/name
  # https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/sect-using_the_mount_command-mounting-options#sect-Using_the_mount_Command-Mounting-Options
  Options=defaults,noatime
  # Fails if mounting takes longer than this (change as appropriate)
  TimeoutSec=1m

  [Install]
  # Defines when to mount during bootup (skip enabling this unit if you
  # enable the .automount instead). See `man systemd.special`
  WantedBy=multi-user.target


  # /etc/systemd/system/full-path-drive-name.automount
  [Unit]
  Description=Automount system to complement systemd mount file
  Documentation=man:systemd.automount(5)
  Conflicts=umount.target
  Before=umount.target

  [Automount]
  Where=/full/path/drive/name
  # Unmount after 15 minutes idle; the drive can then spin down (change as appropriate)
  TimeoutIdleSec=15min

  [Install]
  WantedBy=local-fs.target
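One gotcha worth noting: systemd requires the unit file name to be the escaped form of the mount path, and with an automount you typically enable only the `.automount` unit. A sketch, using the placeholder path from the skeleton above:

```shell
# derive the unit file name systemd expects for this mount point
systemd-escape -p --suffix=mount /full/path/drive/name
# prints: full-path-drive-name.mount

# activate the automount; the .mount unit is pulled in on first access
sudo systemctl daemon-reload
sudo systemctl enable --now full-path-drive-name.automount
```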

If you have one drive it feels like that, but not if you throw 6+2 drives into a RAID6/raidz2. Sure, a full format can take ~3 days (at 100 MB/s sustained), but it's not like you are watching it. The real pain is finding viable backup options that don't cost an arm and a leg.
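The ~3-day figure checks out; e.g. for a hypothetical 24 TB drive at 100 MB/s sustained:

```shell
# full-pass time over the whole disk = capacity / sustained throughput
awk 'BEGIN { tb = 24; mb_per_s = 100;
             s = tb * 1e12 / (mb_per_s * 1e6);
             printf "%.1f days for a full pass\n", s / 86400 }'
```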

If your drives are only managing 100 MB/s then something is wrong; SATA 3 should be at least 500 MB/s.

SATA 3 can move 500MB/s, but high-capacity drives typically can't. They are all below 300MB/s sustained even when shiny new. Look for example at the performance numbers quoted in these data sheets [1][2][3][4], all between 248 MiB/s and 272 MiB/s.

Now that's still a lot faster than 100MB/s. But I have a lot of recertified drives, and while some of them make the advertised numbers some of them have settled at 100MB/s. You could argue that is something wrong with them, but they are in a raid and I don't need them to be fast. That's what the SSD cache is for.

1: Page 3 https://www.seagate.com/content/dam/seagate/en/content-fragm...

2: Page 2 https://www.seagate.com/content/dam/seagate/en/content-fragm...

3: Page 2 https://www.westerndigital.com/content/dam/doc-library/en_us...

4: Page 7 https://www.seagate.com/content/dam/seagate/assets/products/...

Spinning rust drives tend to be much faster on the outer than the inner tracks.
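You can measure that yourself with two read-only dd passes, one at the start of the disk and one near the end (replace /dev/sdX with the real device; the 22000000 MiB offset assumes a ~24 TB drive, so adjust for yours):

```shell
# outer tracks: read 1 GiB from the start of the disk, bypassing the page cache
sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
# inner tracks: read 1 GiB from ~23 TB in; expect a noticeably lower rate
sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 skip=22000000 iflag=direct
```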

I had a 12-disk striped raidz2 array of WD Gold drives that could push 10 Gbit/s over the network, while scrubbing, while running 10 virtual machines, and still had plenty of IO to play with. /shrug

Unfortunately, I’m absolutely watching it :-(

It happens whenever there is a progress indicator. I get obsessed with monitoring and verifying.

Unlike typical RAID-5/6 parity, ZFS/raidz doesn't require a format/parity-initialization pass that writes every block of the disk before the pool can be used. You just need to write labels (at the start and end of each disk), which also serves as a check that the disk is actually as big as claimed.
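Which is why creating a raidz2 pool returns almost instantly regardless of drive size; a sketch (the pool name and device paths are placeholders):

```shell
# only small labels are written to each disk; no parity-initialization pass
sudo zpool create tank raidz2 \
    /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
    /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4
# the pool is immediately usable; redundancy is computed per block on write
zfs list tank
```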


There are 36TB hard drives available.

There are 122TB SSD drives now, though.

Kioxia has a 245TB SSD.