A write-up of a new Gregorian date conversion algorithm.
It achieves a 30–40% speed improvement on x86-64 and ARM64 (benchmarked on an Apple M4 Pro) by reversing the direction of the year count and reducing the operation count to 4 multiplications, versus the usual 7+.
Paper-style explanation, benchmarks on multiple architectures, and full open-source C++ implementation.
Very cool algorithm and great write-up!
I was initially a bit confused about what your algorithm actually does; it only clicked when I got to the pseudo-code. Ideally there would be a high-level description of what the algorithm is supposed to do before that.
Something as simple as: “a date algorithm converts a number of days elapsed since the UNIX epoch (1970-01-01) to a Gregorian calendar date consisting of day, month, and year” would help readers understand what they're about to read.
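For readers skimming the thread, here's what that baseline task looks like in practice. This is a minimal sketch of the traditional naive conversion (year-by-year and month-by-month loops), not the article's algorithm; the `Date` struct and the `civil_from_days` name are just mine for illustration, and it assumes a non-negative day count:

```c++
#include <cstdint>

struct Date { int year; unsigned month; unsigned day; };

// Naive conversion: days since 1970-01-01 -> Gregorian (year, month, day).
// Linear in the number of elapsed years, unlike the O(1) algorithms the
// article discusses; assumes days >= 0 for brevity.
Date civil_from_days(int64_t days) {
    auto is_leap = [](int y) {
        return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
    };
    int year = 1970;
    // Peel off whole years.
    while (days >= (is_leap(year) ? 366 : 365)) {
        days -= is_leap(year) ? 366 : 365;
        ++year;
    }
    static const unsigned kMonthLen[12] =
        {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
    // Peel off whole months, stretching February in leap years.
    unsigned month = 1;
    while (true) {
        unsigned len = kMonthLen[month - 1] + (month == 2 && is_leap(year) ? 1 : 0);
        if (days < len) break;
        days -= len;
        ++month;
    }
    return {year, month, static_cast<unsigned>(days) + 1};
}
```

The fast algorithms in this space replace both loops with closed-form arithmetic, which is where the multiplication count in the write-up comes from.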
How would this algorithm change on 16-bit or 8-bit devices? Or does some variety of the traditional naïve algorithm turn out to be optimal in that case? There's quite a bit of microcontroller software that has to do date conversions, and performance can matter there too. It would also be worth exploring alternative epochs and how they affect the calculation.
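On the epoch point: for a fixed-width day count, switching epochs is just a constant offset applied before (or after) the conversion, so the core algorithm doesn't change; only the representable date range shifts. A hedged sketch of the idea (the names are mine; the constant 10957 is the day count from 1970-01-01 to 2000-01-01, i.e. 30 * 365 + 7 leap days):

```c++
#include <cstdint>

// Days from 1970-01-01 to 2000-01-01: 30 * 365 + 7 leap days.
constexpr int32_t kDaysFrom1970To2000 = 10957;

// A device storing dates as a 16-bit day count since 2000-01-01
// (covering roughly 2000 through 2179) can reuse any 1970-based
// conversion by shifting the epoch with a single addition.
int32_t unix_days_from_y2k_days(uint16_t y2k_days) {
    return static_cast<int32_t>(y2k_days) + kDaysFrom1970To2000;
}
```

The more interesting question on 8-bit parts is probably the multiplications themselves: without a hardware multiplier, a loop-based method could plausibly come out ahead.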
Very nice writeup!
> Years are calculated backwards
How did that insight come about?
Love the literate-programming-style explanation. Hats off!
Nicely done.