I had a computer architecture prof (a reasonably accomplished one, too) who thought that all CS units should be binary, e.g. Gigabit Ethernet should be called 0.931 Gbit/s (binary gigabits), not 1000 Mbit/s.

I disagreed strongly - I think X-per-second rates should be decimal, to correspond with hertz. But for quantities, binary seems better. (Modern CS papers tend to use MiB, GiB, etc. as abbreviations for the binary units.)

Fun fact - for a long time consumer SSDs had roughly 7.37% over-provisioning, because that's what you get when you put X GB (binary) of raw flash into a box and advertise it as X GB (decimal) of usable storage. (In practice a bit less, since a few blocks of the raw flash would be dead on arrival.) With TLC, QLC, and SLC-mode caching in modern drives the numbers aren't as simple anymore, though.
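The arithmetic behind the 7.37%, as a quick Python sketch:

    # Where the ~7.37% over-provisioning figure comes from:
    # X GiB (binary) of raw flash sold as X GB (decimal) of usable space.
    raw_per_unit = 2**30        # 1 GiB of raw flash
    usable_per_unit = 10**9     # 1 GB advertised as usable
    spare = raw_per_unit - usable_per_unit
    print(spare / usable_per_unit)  # 0.073741824, i.e. ~7.37%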

This is the bit (sic) that drives me nuts.

RAM had binary sizing for perfectly practical reasons: an n-bit address bus naturally addresses exactly 2^n cells. Nothing else did (until SSDs inherited RAM's architecture).

We apply it to all the wrong things mostly because the first home computers had nothing but RAM, so binary sizing was the only explanation that was ever needed. And 50 years later we're sticking to that story.

RAM having binary sizing is a perfectly good reason for hard drives having binary-sized sectors (more efficient swap, memory mapping, etc.), which in turn justifies whole hard disks being sized in binary.

Nope. The first home computers like the C64 had both RAM and sectored disks - 256-byte sectors, in the C64's case. And there it is again: a power of two, just a smaller one than 1024.

Only later did some marketing assholes figure out they could sell their hard drives better by lying about the size, weaseling out of the legal issues by redefining the units.

There's a good reason that Gigabit Ethernet is 1000 Mbit/s, and that's because Ethernet was defined in decimal from the start. We had 1 Mbit/s, then 10 Mbit/s, then 100 Mbit/s, then 1000 Mbit/s, and now 10 Gbit/s.

Interestingly, starting from 10 Gbit/s we now also have binary divisions: 5 Gbit/s and 2.5 Gbit/s, i.e. 10 halved and halved again.

Even at slower speeds these were traditionally always decimal-based - we had 50 bps, 100 bps, 150 bps, 300 bps, 1200 bps, 2400 bps, 9600 bps, 19200 bps, and then the odd one out: 56k (actually 57600 bps), where the k means 1024 (approximately) - the first and last common speed to use a base-2 kilo. Once you get into Mbps it's back to decimal.

This has nothing to do with 1024; it has to do with 1200 and its multiples, and with the 14.4k and 28.8k modems, where everyone just cut off the last few hundred bits per second because you never reached that speed anyway.
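Quick sanity check on the numbers (the 56000 figure is V.90's nominal rate, included for reference):

    # Classic serial rates are all multiples of 1200 baud.
    for rate in (1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200):
        assert rate % 1200 == 0
    print(57600 // 1200)  # 48 -> 57600 is 48 x 1200, not a power-of-two story
    print(56 * 1024)      # 57344 -> "56k with k=1024" wouldn't even hit 57600
    print(56 * 1000)      # 56000 -> what a V.90 "56k" modem nominally does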

> that's because it was defined in decimal from the start

I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000-byte pages.

Now Ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is in octets! Starting with the preamble. Yet the payload is capped at an annoying 1500 (decimal) octets. The _minimum_ frame length is defined so that CSMA/CD works, but the max could have been anything.
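The minimum falls out of collision detection: a sender has to still be transmitting when a collision from the far end of the segment gets back to it, which on classic 10 Mbit/s Ethernet means one 512-bit slot time. Sketching that arithmetic (the constants are the textbook IEEE 802.3 values):

    # Classic 10 Mbit/s Ethernet (IEEE 802.3) CSMA/CD arithmetic.
    bit_rate = 10_000_000          # bits per second - decimal by definition
    slot_time_bits = 512           # round-trip budget for collision detection
    min_frame_octets = slot_time_bits // 8
    print(min_frame_octets)              # 64 octets: the minimum frame size
    print(slot_time_bits / bit_rate)     # 5.12e-05 s (51.2 us) on the wire
    # The 1500-octet payload maximum, by contrast, was just a chosen number.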

Wire speeds and bit rates and baud and all that stuff are vastly confusing when you start looking into it - because it's hard to even define what a "bit on the wire" is when everything has to be encoded in such a way that it can be decoded. (Specialized protocols can go FASTER than normal ones on the same wire with the same mechanism if they can guarantee certain things - like never having four zero bits in a row.)
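Concretely, the wire rate and the data rate differ by the line-code overhead. A small sketch using the well-known Ethernet pairings (4B/5B on 100BASE-TX, 8b/10b on 1000BASE-X, 64b/66b on 10GBASE-R):

    # Data rate vs. symbol ("wire") rate for common Ethernet line codes.
    # Each code maps n data bits to m wire bits to guarantee enough
    # transitions for the receiver to stay decodable.
    line_codes = {
        "100BASE-TX (4B/5B)":  (100e6,  4,  5),
        "1000BASE-X (8b/10b)": (1e9,    8, 10),
        "10GBASE-R (64b/66b)": (10e9,  64, 66),
    }
    for name, (data_rate, n, m) in line_codes.items():
        wire_rate = data_rate * m / n
        print(f"{name}: {data_rate/1e6:.0f} Mbit/s data "
              f"-> {wire_rate/1e6:.1f} Mbaud on the wire")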

I can see a precision argument for binary-represented frequencies. A systems programmer would value this. A musician would not.

musicians use numbering systems that are actually far more confused than anything discussed here. how many notes in an OCTave? "do re mi fa so la ti do" is eight, but that last do is part of the next octave, so an octave is 7 notes. (if we count transitions it comes out the same: with the first do as zero, re is 1, ..., and the closing do is again 7.)

the same and even more confusion is engendered when talking about "fifths" etc.

The 7-note scale you suggest (do re mi fa so la ti do) is composed of unequal intervals (2 2 1 2 2 2 1 semitones) within the 12-tone equal-tempered scale. There are infinitely many ways of exploring an octave in music, but unfortunately listener demand for such exploration is near infinitesimal.

don't you mean 11-fold? ... oh wait, they aren't even consistent

They sum to 12

actually they multiply: each step is the 12th root of 2, and the 12th root of 2 raised to the 12th is 2
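Spelled out in a few lines (nothing here beyond the standard 12-TET formula, with the usual A4 = 440 Hz reference):

    # 12-tone equal temperament: semitone steps multiply, they don't add.
    semitone = 2 ** (1 / 12)
    print(semitone ** 12)       # ~2.0: twelve semitones make an octave
    # The "crime": a tempered fifth (7 semitones) only approximates 3/2.
    print(semitone ** 7)        # 1.4983..., vs. the just ratio 1.5
    # Standard pitch: n semitones above A4 = 440 Hz
    print(440 * semitone ** 3)  # C5 ~= 523.25 Hz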

You can blame the Romans for that, as they practiced inclusive counting. Their market days occurring once every 8 days were called nundinae, because the next market day was the ninth day from the previous one. (And by the same logic, Jesus rose from the dead on the third day.)

Musicians often use equal temperament, so they have their own numerical crimes to answer for.

Touché - appropriate to describe near-compulsory equal temperament (à la MIDI) as a crime.

An even bigger problem is that networks are measured in bits while RAM and storage are measured in bytes. I'm sure this leads to plenty of confusion when people see a download running at 120 megs per second on their 1 gig network.

(The old excuse was that networks are serial, but they haven't been serial for decades.)
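For the record, the arithmetic behind that 120 (reading the "meg" as binary megabytes per second):

    # Why a "1 gig" link shows up as ~120 "megs" per second.
    link_bits_per_s = 1_000_000_000   # 1 Gbit/s - decimal by definition
    bytes_per_s = link_bits_per_s / 8
    print(bytes_per_s / 10**6)        # 125.0   MB/s  (decimal megabytes)
    print(bytes_per_s / 2**20)        # 119.2.. MiB/s (binary megabytes, the "120 meg")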