A write-ahead log isn't a performance tool to batch changes; it's a tool to get durability for random writes. You write your intended changes to the log, fsync it (which means you get a 4k write), then make the actual changes on disk just as if you didn't have a WAL.
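
Roughly, the write path looks something like this. A minimal sketch in Python; the file layout, record format, and class name are made up, and a real WAL would add checksums, LSNs, and replay on recovery:

```python
import os
import struct

PAGE_SIZE = 4096  # hypothetical page size

class SimpleWAL:
    """Sketch: log the intended page write, fsync the log, then do the
    random write to the data file exactly as you would without a WAL."""

    def __init__(self, data_path, wal_path):
        # assumes data_path already exists and is page-aligned
        self.data = open(data_path, "r+b", buffering=0)
        self.wal = open(wal_path, "ab", buffering=0)

    def write_page(self, page_no, payload):
        assert len(payload) == PAGE_SIZE
        # 1. Describe the change in the log (sequential append).
        record = struct.pack(">Q", page_no) + payload
        self.wal.write(record)
        # 2. fsync the log: this is the durability point, and it costs at
        #    least one 4k block write even for a tiny record.
        os.fsync(self.wal.fileno())
        # 3. The random write to the data file, same as without a WAL.
        self.data.seek(page_no * PAGE_SIZE)
        self.data.write(payload)
        # The data file itself can be synced later, e.g. at a checkpoint;
        # the log already covers the change.
```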

If you want to get some sort of sub-block batching, you need a structure that isn't random in the first place, for instance an LSM (where you write all of your changes sequentially to a log and then do compaction later)—and then solve your durability in some other way.
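
For contrast, a toy LSM-style write path, where nothing on the write path is a random write. Names and the flush threshold are invented, and real LSMs add compaction, Bloom filters, and on-disk SSTable formats that are skipped here:

```python
import json

class TinyLSM:
    """Sketch of the LSM idea: every write is a sequential append to a log
    plus an in-memory table; sorted runs are written out and merged later."""

    def __init__(self, log_path):
        self.log = open(log_path, "ab", buffering=0)
        self.memtable = {}   # key -> value, sorted only when flushed
        self.runs = []       # sorted runs (plain dicts here for brevity)

    def put(self, key, value):
        # Sequential append only; no random I/O on the write path.
        self.log.write((json.dumps([key, value]) + "\n").encode())
        self.memtable[key] = value
        if len(self.memtable) >= 1024:   # arbitrary flush threshold
            self.flush()

    def flush(self):
        # Write the memtable out as a sorted run; compaction would later
        # merge runs so reads stay cheap and old log segments can be dropped.
        self.runs.append(dict(sorted(self.memtable.items())))
        self.memtable.clear()
```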

> A write-ahead log isn't a performance tool to batch changes, it's a tool to get durability of random writes.

Why not both?

Because it is in addition to your writes, not instead of them. That's what “ahead” points to.

The actual writes don't need to be persisted on transaction commit, only the WAL. In most DBs the actual writes won't be persisted until the written page is evicted from the page cache. In this sense, writing the WAL generally does provide better perf than synchronously doing a random page write.
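
To make that concrete, here's a sketch of a commit path where only the WAL is synced and the dirty pages are written back later. Hypothetical names, not any particular DB's code:

```python
import os

class BufferPool:
    """Sketch: commit durably appends to the WAL only; modified pages stay
    in memory and are written back later, on eviction or at a checkpoint."""

    def __init__(self, wal_path, data_file):
        self.wal = open(wal_path, "ab", buffering=0)
        self.data = data_file          # an already-open data file object
        self.dirty = {}                # page_no -> page bytes

    def commit(self, wal_records, dirty_pages):
        self.dirty.update(dirty_pages)       # pages stay cached, not written
        self.wal.write(wal_records)
        os.fsync(self.wal.fileno())          # the only sync on the commit path

    def evict(self, page_no, page_size=4096):
        # The random write happens here, possibly long after commit returned.
        self.data.seek(page_no * page_size)
        self.data.write(self.dirty.pop(page_no))
```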

Look up how "checkpointing" works in Postgres.

I know how checkpointing works in Postgres (which isn't very different from how it works in most other redo-log implementations). It still does not change that you need to actually update the heap at some point.
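
For anyone following along, a checkpoint in any redo-log scheme boils down to roughly this. A sketch building on the buffer-pool sketch above, with made-up helper names, not Postgres internals:

```python
import os

def checkpoint(buffer_pool, wal):
    # Flush every dirty page to the heap, sync the data file, then note in
    # the WAL how far has been applied so older segments can be recycled.
    checkpoint_lsn = wal.current_lsn()         # hypothetical helper
    for page_no in list(buffer_pool.dirty):
        buffer_pool.evict(page_no)             # the deferred random writes happen here
    os.fsync(buffer_pool.data.fileno())
    wal.record_checkpoint(checkpoint_lsn)      # hypothetical helper: recovery starts here
    wal.truncate_before(checkpoint_lsn)        # or recycle segments, as Postgres does
```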

Postgres supports group commit, which tries to combine multiple transactions into one fsync, but it adds delay and is off by default. And even so, it reduces fsyncs, not writes.
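
The relevant settings are commit_delay and commit_siblings. Conceptually the mechanism is something like this simplified sketch (not Postgres code):

```python
import os
import threading

class GroupCommitLog:
    """Sketch of group commit: each committer appends its record, but one
    fsync also covers records appended by others in the meantime, so
    several commits can share a single fsync."""

    def __init__(self, wal_path):
        self.wal = open(wal_path, "ab", buffering=0)
        self.lock = threading.Lock()
        self.synced_upto = 0

    def commit(self, record):
        with self.lock:
            self.wal.write(record)
            my_end = self.wal.tell()
        # Optionally sleep a little here (Postgres' commit_delay) so more
        # commits can pile in before the fsync below.
        with self.lock:
            if self.synced_upto < my_end:
                os.fsync(self.wal.fileno())        # one fsync for the whole batch
                self.synced_upto = self.wal.tell()
        # Note: this saves fsyncs, not bytes written, matching the point above.
```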

But it turns those multiplied writes into two more-sequential streams of writes. Yeah, it duplicates things, but the purpose is to allow as much sequential IO as possible (along with the other benefits and tradeoffs).

You can unify the database with the write-ahead log by using a persistent data structure. That also gives you cheap/free snapshots/checkpoints.
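
A toy version of that idea, where the append-only file is both the log and the database, and every committed root offset is a snapshot. It rewrites the whole map on each commit for brevity; a real persistent structure would share nodes copy-on-write (a COW B-tree, for instance):

```python
import json
import os

class AppendOnlyMap:
    """Sketch of a persistent structure doubling as its own log: every
    commit appends a new version, so the log *is* the database and every
    old committed offset is a free snapshot."""

    def __init__(self, path):
        self.f = open(path, "a+b", buffering=0)

    def commit(self, new_version):
        offset = self.f.seek(0, os.SEEK_END)
        self.f.write((json.dumps(new_version) + "\n").encode())
        os.fsync(self.f.fileno())   # durability and the "WAL write" are the same write
        return offset               # keep this around and you have a snapshot

    def read_snapshot(self, offset):
        self.f.seek(offset)
        return json.loads(self.f.readline())
```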