Skipping flushing the local disk seems rather silly to me:

- A modern high-end SSD commits faster than the one-way network time to anywhere much farther than a few miles away. (Do the math: a few tens of microseconds of specified write latency is pretty common, NVDIMMs (a sadly dying technology) can do even better, and the speed of light is only so fast. A quick back-of-the-envelope calculation follows this list.)

- Unfortunate local correlated failures happen. IMO it’s quite nice to be able to boot up your machine / rack / datacenters and have your data there.

- Not everyone runs something on the scale of S3 or EBS. Those systems are awesome, but they are (a) exceedingly complex and (b) really very slow compared to SSDs. If I’m going to run an active/standby or active/active system with, say, two locations, I will flush to disk in both locations.
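To make the back-of-the-envelope math concrete, here's a quick sketch; the latency and fiber-speed numbers are rough illustrative assumptions, not measurements of any particular device or link:

```python
# Rough comparison: SSD commit latency vs. one-way propagation delay in fiber.
# Both numbers are illustrative assumptions, not measurements.
SSD_WRITE_LATENCY_US = 30        # "a few tens of microseconds" of specified write latency
FIBER_KM_PER_US = 0.2            # light in fiber: ~200,000 km/s, i.e. ~5 us per km

for distance_km in (2, 10, 100, 1000):
    one_way_us = distance_km / FIBER_KM_PER_US
    winner = "local SSD commit is faster" if SSD_WRITE_LATENCY_US < one_way_us else "network hop is faster"
    print(f"{distance_km:>4} km away: one-way ~{one_way_us:>5.0f} us vs SSD ~{SSD_WRITE_LATENCY_US} us -> {winner}")
```

With those assumptions the break-even distance is only around 6 km (~4 miles); beyond that, the local flush finishes before the bytes could even reach the remote site.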

> Skipping flushing the local disk seems rather silly to me

It is. Coordinated failures shouldn't be a surprise these days, and it's kind of sad to hear that from an AWS engineer. The same data pattern fills the buffers and crashes multiple servers, while they were all "hoping" that the others fsynced the data, but it turns out they all filled up and crashed. That's just one case; there are others.

Durability always comes with an asterisk, i.e. it is guaranteed up to some number N of devices failing. Once that N is set, your durability is gone the moment those N devices all fail together, whether that N counts local disks or remote servers.
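As a toy illustration of why that N only buys you anything when failures are independent (all probabilities here are made up for the example):

```python
# Toy model: probability of losing data with 3 replicas, independent vs. fully
# correlated failures. p_device is a made-up per-device failure probability.
p_device = 0.01
replicas = 3

p_loss_independent = p_device ** replicas   # all replicas must fail on their own
p_loss_correlated = p_device                # one shared trigger takes out every replica at once

print(f"independent failures:      ~{p_loss_independent:.6%} chance of data loss")
print(f"fully correlated failures: ~{p_loss_correlated:.2%} chance of data loss")
```

The replication factor only shows up as an exponent in the independent case; a correlated trigger collapses it back to a single device's odds.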

Interestingly, on bare metal or old-school VMs, durability of local storage was pretty good. If the rack power failed, your data was probably still there. Sure, maybe it was only 99% or 99.9%, but that’s not bad if power failures are rare.

AWS etc, in contrast, have really quite abysmal durability for local disk storage. If you want the performance and cost benefits of using local storage (as opposed to S3, EBS, etc), there are plenty of server failure scenarios where the probability that your data is still there hovers around 0%.

This is about not even attempting durability before returning a result ("Commit-to-disk on a single system is [...] unnecessary"); it's hoping that servers won't crash and restart together: some might fail, but others will eventually commit. However, that assumes a random subset of uncoordinated hardware failures, say a cosmic ray blasting the SSD controller. That's fine, but it fails to account for coordinated failure, where a particular workload leads to the same overflow scenario on every server: they all acknowledge the writes to the client and then all crash and restart.

To some extent the only way around that is to use non-uniform hardware though.

Suppose you have each server commit the data "to disk" but it's really a RAID controller with a battery-backed write cache or enterprise SSD with a DRAM cache and an internal capacitor to flush the cache on power failure. If they're all the same model and you find a usage pattern that will crash the firmware before it does the write, you lose the data. It's little different than having the storage node do it. If the code has a bug and they all run the same code then they all run the same bug.

Yeah, good point. At least if you wait until you get an acknowledgement for the fsync on N nodes, you're already in a much better position. Maybe overkill, but you can also read back the data and re-verify the checksum. But in general you make a good point; I think that's why some folks deliberately use different drive models and/or RAID controllers to avoid cases like that.
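A minimal sketch of that fsync-then-verify idea on a single local file (in a real system the same check would run on each of the N nodes before acknowledging the client; note the read-back may be served from the page cache, so it mainly catches torn or corrupted writes rather than proving the on-media copy):

```python
import hashlib
import os

def write_fsync_and_verify(path: str, payload: bytes) -> None:
    """Write payload, fsync it, then read it back and re-verify the checksum."""
    digest = hashlib.sha256(payload).hexdigest()

    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, payload)
        os.fsync(fd)                      # wait for the OS/drive to acknowledge the flush
    finally:
        os.close(fd)

    # Read back and compare checksums before acknowledging the write upstream.
    with open(path, "rb") as f:
        if hashlib.sha256(f.read()).hexdigest() != digest:
            raise IOError(f"checksum mismatch after fsync on {path}")
```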

This is an aside, but has anyone tried NVDIMMs as the disk, behind in-package HBM for RAM? No idea if it would be any good, just kind of a funny thought. It's like everything shifted one slot closer to the cores, haha: nonvolatile memory where the RAM used to live, memory pretty close to the core.

I think this entire design approach is on its way out. It turns out that the DIMM protocol was very much designed for volatile RAM, and shoehorning anything else in involves changes through a bunch of the stack (CPU, memory controller, DIMMs), which are largely proprietary and were never intended to support complex devices from other vendors according to published standards. Sure, every CPU and motherboard attempts to work with every appropriately specced DIMM, but that doesn’t mean that the same “physical” bits land in the same DRAM cells if you change your motherboard. Beyond interoperability issues, the entire cache system on most CPUs was always built on the assumption that, if the power fails, the contents of memory do not need to retain any well-defined values. Intel had several false starts trying to build a reliable mechanism to flush writes all the way to the DIMM.

Instead the industry seems to be moving toward CXL for fancy memory-like-but-not-quite-memory. CXL is based on PCIe, and it doesn’t have these weird interoperability and caching issues. Flushing writes all the way to PCIe has never been much of a problem, since basically every PCI or PCIe device ever requires a mechanism by which host software can communicate all the way to the device without the IO getting stalled in some buffer on the way.

I think it is fair to argue that there is a strong correlation between criticality of data and network scale. Most small businesses don't need anything at S3 scale, but they also don't need 24-hour uptime, and losing the most recent day of data is annoying rather than catastrophic, so they can probably get away without flushing, with daily asynchronous backups to a different machine and a one-minute UPS to allow data to be safely written out in the event of a power outage.

Committing to an NVMe drive properly is really costly. I'm talking about using O_DIRECT | O_SYNC or fsync here. It can be on the order of whole milliseconds, easily. And it is much worse if you are using cloud systems.

It is actually very cheap if done right. Enterprise SSDs have write-through caches, so an O_DIRECT|O_DSYNC write is sufficient, if you set things up so the filesystem doesn't have to also commit its own logs.
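For what it's worth, a minimal Linux sketch of such a write path might look like this; the path is a placeholder and must live on a filesystem/device that supports O_DIRECT, and a 4096-byte block size is assumed:

```python
import mmap
import os

BLOCK = 4096  # assumed logical block size; O_DIRECT needs aligned buffer, offset, and length

# Placeholder path: must be on a filesystem that supports O_DIRECT (not tmpfs).
fd = os.open("odirect-test.bin", os.O_WRONLY | os.O_CREAT | os.O_DIRECT | os.O_DSYNC, 0o644)
try:
    buf = mmap.mmap(-1, BLOCK)       # anonymous mmap gives a page-aligned, zero-filled buffer
    buf[:16] = b"commit me please"
    written = os.write(fd, buf)      # returns only once the device has acknowledged the data
    assert written == BLOCK
finally:
    os.close(fd)
```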

I just tested the mediocre enterprise nvme I have sitting on my desk (micron 7400 pro), it does over 30000 fsyncs per second (over a thunderbolt adapter to my laptop, even)
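Roughly the kind of script described above, for anyone who wants to reproduce it; the path and sizes are arbitrary, and results will vary a lot by drive, filesystem, and connection:

```python
import os
import time

PATH = "fsync-bench.tmp"      # placeholder: put this on the device you want to test
ITERATIONS = 10_000

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
start = time.perf_counter()
for _ in range(ITERATIONS):
    os.write(fd, b"x" * 64)   # "a little data"
    os.fsync(fd)              # ask the kernel to push it to stable storage
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{ITERATIONS / elapsed:,.0f} fsyncs/second ({elapsed / ITERATIONS * 1e6:.0f} us each)")
```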

Another complexity here besides syncs per second is the size of the requests and duration of this test, since so many products will have faster cache/buffer layers which can be exhausted. The effect is similar whether this is a "non-volatile RAM" area on a traditional RAID controller, intermediate write zones in a complex SSD controller, or some logging/journaling layer on another volume storage abstraction like ZFS.

It is great as long as your actual workload fits within that buffer, but a microbenchmark is misleading if it doesn't show you the knee in the curve where you exhaust the buffer and start observing the storage controller as it retires things from this buffer zone to the longer-term storage areas. There can also be far more variance in this state, since it involves not just slower storage layers but more bookkeeping or even garbage-collection work.
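One hedged way to probe for that knee is to run progressively longer bursts of synced writes and watch the per-fsync latency distribution; the burst sizes below are guesses, and the larger ones can take a long time to run:

```python
import os
import statistics
import time

PATH = "knee-probe.tmp"       # placeholder path on the device under test
WRITE = b"x" * 4096

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
for burst_mib in (64, 256, 1024):
    latencies = []
    for _ in range(burst_mib * 1024 * 1024 // len(WRITE)):
        t0 = time.perf_counter()
        os.write(fd, WRITE)
        os.fsync(fd)
        latencies.append(time.perf_counter() - t0)
    p50 = statistics.median(latencies) * 1e6
    p99 = statistics.quantiles(latencies, n=100)[98] * 1e6
    print(f"{burst_mib:>5} MiB burst: p50 {p50:.0f} us, p99 {p99:.0f} us")
os.close(fd)
```

If the drive's fast buffer is being exhausted, the p99 (and eventually the p50) should climb sharply somewhere in the larger bursts.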

If you tested this on macOS, be careful. fsync on it lies.

nope, linux python script that writes a little data and calls os.fsync

What's a little data?

In many situations, fsync flushes everything, including totally uncorrelated stuff that might be running on your system.

fsync on most OSes lies to some degree

Isn't that why a WAL exists, so you don't actually need to do that with, e.g., Postgres and other RDBMSes?

You must still commit the WAL to disk; that is why the WAL exists: it writes ahead to a log on durable storage. It doesn't have to commit the main storage to disk, only the WAL, which is better since it's just an append to the end of the log rather than placing data correctly in the table storage, which is slower.
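A toy illustration of that split, with names and format invented for the example: the only synchronous flush on the commit path is the sequential append to the log, and the slower, correctly-placed table write happens later:

```python
import os

class ToyWAL:
    """Toy write-ahead log: commit = append + fsync the log; the table is updated lazily."""

    def __init__(self, log_path: str) -> None:
        self.fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.pending: list[bytes] = []        # records not yet applied to the main table file

    def commit(self, record: bytes) -> None:
        os.write(self.fd, record + b"\n")     # sequential append to the end of the log
        os.fsync(self.fd)                     # the one durable write on the commit path
        self.pending.append(record)           # main storage gets updated later

    def checkpoint(self, table_path: str) -> None:
        # Applying records to the table file (the slower, "placed correctly" write)
        # happens off the commit path and doesn't need to block the client.
        with open(table_path, "ab") as table:
            for record in self.pending:
                table.write(record + b"\n")
        self.pending.clear()
```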

You must have a single flushed write to disk to be durable, but it doesn't need the second write to the main storage.