> Without version numbers, it has to be backwards-compatible
If there’s one thing that mathematical notation is NOT, it’s backwards compatible. Fields happily reuse symbols from other fields with slightly or even completely different meanings.
https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo... has lots of examples, for example
÷ (division sign)
Widely used for denoting division in Anglophone countries, it is no longer in common use in mathematics and its use is "not recommended". In some countries, it can indicate subtraction.
~ (tilde)
1. Between two numbers, either it is used instead of ≈ to mean "approximatively equal", or it means "has the same order of magnitude as".
2. Denotes the asymptotic equivalence of two functions or sequences.
3. Often used for denoting other types of similarity, for example, matrix similarity or similarity of geometric shapes.
4. Standard notation for an equivalence relation.
5. In probability and statistics, may specify the probability distribution of a random variable. For example, X∼N(0,1) means that the distribution of the random variable X is standard normal.
6. Notation for proportionality. See also ∝ for a less ambiguous symbol.
Individual mathematicians are even known to have broken backwards compatibility. https://en.wikipedia.org/wiki/History_of_mathematical_notati...
*Euler used i to represent the square root of negative one (√-1), although he earlier used it as an infinite number*
Even simple definitions have changed over time, for example:
- how numbers are written
- is zero a number?
- is one a number?
- is one a prime number?
> Fields happily reuse symbols from other fields with slightly or even completely different meanings.
Symbol reuse doesn't imply a break in backwards compatibility. As you suggest with "other fields", context allows determining how the symbols are used. It is quite common in all types of languages to reuse symbols for different purposes, relying on context to identify what purpose is in force.
Backwards incompatibility means that something from the past can no longer be used with modern methods. Mathematical notation from long ago doesn't much look like what we're familiar with today, but we can still make use of it. It wasn't rendered inoperable by modern notation.
> Mathematical notation from long ago doesn't much look like what we're familiar with today, but we can still make use of it.
But few modern mathematicians can understand it. Given enough data, they can figure out what it means, but that’s similar to (in this somewhat weak analogy) running code in an emulator.
What we can readily make use of are mathematical results from long ago.
> Given enough data, they can figure out what it means
Right, whereas something that isn't backwards compatible couldn't be figured out no matter how much data is given. Consider this line of Python:
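For example (one possible such line, assuming the ambiguity in question is the change in division semantics between Python 2 and Python 3):

    # Prints 0 under Python 2 (integers use floor division),
    # but prints 0.5 under Python 3 (true division).
    # The line alone cannot tell you which behaviour applies.
    print(1 / 2)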
There is no way you can know what the output should be. That is, unless we introduce synthetic context (i.e. a version number). Absent synthetic context, we can reasonably assume that natural context is sufficient, and where natural context is sufficient, backwards compatibility is present.
> What we can readily make use of are mathematical results from long ago.
To some degree, but mostly we've translated the old notation into modern notation for the sake of familiarity. And certainly a lot of programming that gets done is exactly that: Rewriting the exact same functionality in something more familiar.
But like mathematics, while there may have been a lot of churn in the olden days when nothing existed before it and everyone was trying to figure out what works, programming notation has largely settled on what is familiar, with reasonable stability, and will no doubt only find greater stability as it matures.