I'm not sure I'm reading that formula right, but isn't the worst-case data loss limited to the last 100ms?

I don't see many single-user/desktop application use cases where that kind of window would be risky.

Maybe if you used this as the main production database with thousands of concurrent sessions. But that doesn't seem to be their main use case, is it?

Do you see any single-user/desktop application that needs the kind of speed boasted in the README? That's the speed you're trading durability for.

When operations complete in 200ns instead of blocking for microseconds to milliseconds on fsync, you avoid thread pool exhaustion and connection queueing. Each synchronous write pins its thread until the disk confirms, tying up memory and connection slots and causing tail-latency spikes.
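To make the blocking path concrete, here's a minimal write-through sketch in plain std Rust (not FeOxDB's API; `durable_put` and `kv.log` are made-up names). The `sync_all` call is the fsync that pins the thread for the full disk round trip:

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Write-through put: the calling thread is parked until the disk confirms.
fn durable_put(path: &str, entry: &[u8]) -> std::io::Result<()> {
    let mut log = OpenOptions::new().create(true).append(true).open(path)?;
    log.write_all(entry)?;
    log.sync_all()?; // fsync: microseconds to milliseconds while this thread just waits
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Every iteration pays the full fsync latency on the caller's thread.
    for i in 0..100 {
        durable_put("kv.log", format!("key{i}=value{i}\n").as_bytes())?;
    }
    Ok(())
}
```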

With FeOxDB's write-behind approach:

  - Operations return immediately, threads stay available

  - Background workers batch writes, amortizing sync costs across many operations (see the sketch after this list)

  - The same hardware can handle 100x more concurrent requests

  - Lower cloud bills from needing fewer instances
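This isn't FeOxDB's actual code, just a minimal sketch of the batching idea with std threads and a channel: callers enqueue and return immediately, while one background worker drains whatever has accumulated and pays a single `sync_all` for the whole batch. A real write-behind store would also flush on a timer, which is where the ~100ms worst-case loss window above comes from. `kv.log` and the loop bounds are arbitrary.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::sync::mpsc;
use std::thread;

fn main() -> std::io::Result<()> {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();

    // Background worker: block for the first pending entry, then drain the
    // rest of the queue so many puts share a single fsync.
    let flusher = thread::spawn(move || -> std::io::Result<()> {
        let mut log = OpenOptions::new().create(true).append(true).open("kv.log")?;
        while let Ok(first) = rx.recv() {
            let mut batch = vec![first];
            batch.extend(rx.try_iter());
            for entry in &batch {
                log.write_all(entry)?;
            }
            log.sync_all()?; // one sync amortized across the whole batch
        }
        Ok(())
    });

    // Callers: enqueue and return immediately, never blocking on the disk.
    for i in 0..1000 {
        tx.send(format!("key{i}=value{i}\n").into_bytes()).unwrap();
    }

    drop(tx); // close the channel so the flusher drains and exits
    flusher.join().unwrap()
}
```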

For desktop apps, this means your KV store doesn't tie up threads that the UI needs. For servers, it means handling more users without scaling up.

The durability tradeoff makes sense once you realize that most KV workloads hold derived data that can be rebuilt. Why block threads and exhaust IOPS for fsync-level durability on data that doesn't need it?
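A concrete way to picture "derived data": if the store is acting as a cache over something recomputable (thumbnails, session state, parsed indexes), a lost entry just costs one recompute on the next read. Hypothetical names below, not FeOxDB's API:

```rust
use std::collections::HashMap;

// Hypothetical cache over a recomputable source: losing the last ~100ms of
// entries only means paying the rebuild again on the next lookup.
fn get_or_rebuild(
    cache: &mut HashMap<String, Vec<u8>>,
    key: &str,
    rebuild: impl Fn(&str) -> Vec<u8>,
) -> Vec<u8> {
    cache
        .entry(key.to_string())
        .or_insert_with(|| rebuild(key))
        .clone()
}

fn main() {
    let mut cache = HashMap::new();
    // First call computes; later calls hit the cache until the entry is lost,
    // at which point it is simply computed again.
    let thumb = get_or_rebuild(&mut cache, "thumbnail:42", |k| {
        format!("rendered({k})").into_bytes()
    });
    println!("{} bytes", thumb.len());
}
```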