There's a good reason that gigabit Ethernet is 1000 Mbit/s: it was defined in decimal from the start. We had 1 Mbit/s, then 10 Mbit/s, then 100 Mbit/s, then 1000 Mbit/s, and now 10 Gbit/s.
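Just to spell out what "defined in decimal" means for the gigabit figure, here's a quick sketch (the variable names are mine, the arithmetic is standard):

```python
# In Ethernet the prefixes are decimal, so "1 Gbit/s" means exactly
# 10**9 bit/s, not the binary 2**30.
decimal_gigabit = 10**9    # what 1000BASE-T actually signals: 1,000,000,000 bit/s
binary_gibibit = 2**30     # 1,073,741,824 bit/s, if "giga" had meant a binary prefix
print(binary_gibibit - decimal_gigabit)  # 73,741,824 bit/s difference
```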

Interestingly, below 10 Gbit/s we now also have binary divisions of it: 5 Gbit/s and 2.5 Gbit/s.

Even at slower speeds, these were traditionally always decimal: 50 bps, 100 bps, 150 bps, 300 bps, 1200 bps, 2400 bps, 9600 bps, 19200 bps. Then we had the odd one out, 56k (actually 57600 bps), where the k means (approximately) 1024, making it the first and last common speed to use a base-2 kilo. Once you get into Mbit/s it's back to decimal.
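Spelling out the arithmetic behind that claim (a quick sketch, taking the 57600 bps figure above at face value):

```python
# If "k" were decimal, 57600 bps would round to "58k"; only a binary-ish
# k of 1024 gets you near the familiar "56k" name.
rate = 57600
print(rate / 1000)   # 57.6  -> would suggest "58k"
print(rate / 1024)   # 56.25 -> the "56k" reading claimed above
```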

This has nothing to do with 1024; it has to do with 1200 and its multiples, and with the 14k and 28k modems, where everyone just cut off the last few hundred bits per second because you never reached that speed anyway.
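A tiny sketch of that doubling-and-truncating pattern, just to illustrate the reasoning (this is the comment's logic, not any standard):

```python
# The marketing name doubles (14k -> 28k -> 56k) alongside the rate
# (14400 -> 28800 -> 57600), so the amount the name "cuts off" grows each step.
name_k, rate = 14, 14400
for _ in range(3):
    print(f"{name_k}k modem, nominal {rate} bps, name under-reports by {rate - name_k * 1000} bps")
    name_k, rate = name_k * 2, rate * 2
```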

> that's because it was defined in decimal from the start

I mean, that's not quite it. By that logic, had memory happened to be defined in decimal from the start, we'd have 4000-byte pages.

Now Ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is octets! Starting with the preamble. Yet the payload is up to an annoying 1500 (decimal) octets. The _minimum_ frame length is defined so that CSMA/CD works, but the maximum could have been anything.
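For reference, a quick sketch of where those octets sit in a classic untagged frame (the field sizes are the standard 802.3 ones; the 64-octet minimum is the CSMA/CD constraint mentioned above, and the constant names are just mine):

```python
# Classic untagged Ethernet frame layout, all in octets.
PREAMBLE = 7      # alternating 1010... pattern
SFD      = 1      # start frame delimiter
DST_MAC  = 6
SRC_MAC  = 6
ETHERTYPE = 2
PAYLOAD_MIN, PAYLOAD_MAX = 46, 1500   # the "annoying 1500" above
FCS      = 4

# Frame size is conventionally counted without preamble/SFD.
min_frame = DST_MAC + SRC_MAC + ETHERTYPE + PAYLOAD_MIN + FCS   # 64 octets
max_frame = DST_MAC + SRC_MAC + ETHERTYPE + PAYLOAD_MAX + FCS   # 1518 octets
print(min_frame, max_frame)
```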