I don't think of base 10 as being meaningful in binary computers. Indexing 1k items needs 10 bits regardless of whether you want 1000 or 1024, and base 10 leaves some awkward holes.

In my mind base 10 only became relevant when disk drive manufacturers came up with drives in "weird" sizes (maybe they needed to reserve some space for internals, or maybe the platters just didn't like powers of two) and realised that a base 10 system gave them better-looking marketing numbers. Who wants a 2.9TB drive when you can get a 3TB* drive for the same price?

> I don't think of base 10 as being meaningful in binary computers.

They communicate via the network, right? And telephony has always counted in base-10 bits, as opposed to base-2 eight-bit bytes, IIUC. So these two schemes have always been in tension.

So at some point the Ki, Mi, etc. prefixes were introduced, along with the b vs B suffixes, and that solved the issue decades ago. So why is this on the HN front page?!

A better question might be: why do we privilege the 8-bit byte? Shouldn't KiB officially have a subscript 8 on the end?

At the TB level, the difference is closer to 10%.

Three binary terabytes, i.e. 3 × 2^40 bytes, is 3,298,534,883,328, or 298,534,883,328 more bytes than 3 decimal terabytes. The latter is 298.5 decimal gigabytes, or 278 binary gigabytes.
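The arithmetic above is easy to check; here's a quick Python sketch (the variable names are my own):

```python
# Gap between 3 binary terabytes (TiB) and 3 decimal terabytes (TB)
binary_tb = 3 * 2**40      # 3,298,534,883,328 bytes
decimal_tb = 3 * 10**12    # 3,000,000,000,000 bytes
diff = binary_tb - decimal_tb

print(diff)           # 298534883328
print(diff / 10**9)   # ~298.5 decimal gigabytes
print(diff / 2**30)   # ~278 binary gigabytes (GiB)
```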

Indeed, early hard drives had slightly more than even the binary size: the famous 10MB IBM disk, for example, held 10,653,696 bytes, which is 167,936 bytes more than 10 binary megabytes, i.e. more than an entire 160KB floppy's worth of data.
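Those numbers check out too; a small sketch (taking 10,653,696 from the comment above and a 160KB floppy as 160 × 1024 bytes):

```python
# IBM's 10MB drive vs 10 binary megabytes (10 MiB)
drive_bytes = 10_653_696
ten_mib = 10 * 2**20          # 10,485,760 bytes
surplus = drive_bytes - ten_mib

floppy_160kb = 160 * 2**10    # 163,840 bytes
print(surplus)                # 167936
print(surplus > floppy_160kb) # True: more than a full 160KB floppy
```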

> I don't think of base 10 as being meaningful in binary computers.

Okay, but what do you mean by “10”?