derivation of -x seems wrong. we can look at the execution trace on a stack machine, but it's actually not hard to see. starting from the last node before the output, we see that the tree has the form

    eml(z, eml(x, 1))
      = e^z - ln(eml(x, 1))
      = e^z - ln(e^x - ln(1))
      = e^z - ln(e^x)
      = e^z - x
and the claim is that, after z is expanded, this whole expression equals -x. but a little algebra shows that can happen only if

    e^z = 0,
and no complex number z satisfies this equation. indeed, if we laboriously expand the given formula for z (the left branch of the tree), we see that it passes through ln(0) inside compound expressions.
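To make the algebra concrete, here's a quick numerical check (a sketch assuming the binary primitive is eml(a, b) = e^a - ln b, as in the tree above): the composite really does reduce to e^z - x, so it could only equal -x if e^z = 0, which no finite complex z achieves.

```python
import cmath

# Assumed EML primitive: eml(a, b) = e^a - ln(b)
def eml(a, b):
    return cmath.exp(a) - cmath.log(b)

x = 0.7
z = 0.3 + 0.4j  # an arbitrary finite complex z

# eml(x, 1) = e^x - ln(1) = e^x, so the whole tree is e^z - ln(e^x) = e^z - x
tree = eml(z, eml(x, 1))
assert abs(tree - (cmath.exp(z) - x)) < 1e-12

# The gap between the tree and -x is exactly |e^z| = e^Re(z), which is
# strictly positive for every finite z, so the tree never equals -x.
assert abs(tree - (-x)) > 0
```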

x^-1 has the same problem.

both formulae work ...sort of... if we allow ln(0) = -Infinity and a few other conventions, such as x / Infinity = 0 for all finite x.
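For what it's worth, those conventions are easy to emulate in plain Python floats (a sketch; safe_ln is my own wrapper, not from the paper): once we set ln 0 := -inf, IEEE754 supplies e^(-inf) = 0 and x / inf = 0 for free, and the required e^z = 0 becomes reachable.

```python
import math

def safe_ln(v):
    # Extended-real convention: ln(0) = -inf (math.log(0.0) would raise)
    return float('-inf') if v == 0.0 else math.log(v)

assert safe_ln(0.0) == float('-inf')
assert math.exp(float('-inf')) == 0.0   # e^(-inf) = 0 in IEEE754
assert 42.0 / float('inf') == 0.0       # x / inf = 0 for all finite x

# With these conventions, taking z through ln(0) makes e^z = 0,
# so e^z - x really does evaluate to -x:
x = 5.0
assert math.exp(safe_ln(0.0)) - x == -x
```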

yeah, it's annoying that the author talks about RPN notation but only gives the found formulas as images

looks like it computes ln(1)=0, then computes e-ln(0)=+inf, then computes e-ln(+inf)=-inf

ah, the paper acknowledges this. my bad for jumping to the diagrams!

On page 11, the paper explicitly states:

> EML-compiled formulas work flawlessly in symbolic Mathematica and IEEE754 floating-point… This is because some formulas internally might rely on the following properties of extended reals: ln 0 = −∞, e^(−∞) = 0.

And then follows with:

> But EML expressions in general do not work ‘out of the box’ in pure Python/Julia or numerical Mathematica.

Thus, the paper’s completeness claim depends on a non-standard arithmetic convention (ln(0) = -∞), not just the complex numbers it primarily advertises. While the paper is transparent about this, the admission is buried on page 11 rather than foregrounded as a core caveat. Your comment deserves credit for flagging it.

I would not call ln(0) = -∞ a "non-standard arithmetic convention."

This is the standard convention when doing operations in the extended real number line, i.e. in the set of the real numbers completed with positive and negative infinities.

When the overflow exception is disabled, any modern CPU implements floating-point operations as operations on the extended real number line.

So in computing this convention has been standard for more than 40 years, while in mathematics it has been standard for a couple of centuries or so.

As always in mathematics, when computing expressions, i.e. when computing any kind of function, you must be well aware of the sets within which you operate.

If you work with real numbers (i.e. in a computer you enable the FP overflow exception), then ln(0) is undefined. However, if you work with the extended real number line, which is actually the default setting in most current programming languages, then ln(0) is well defined and it is -∞.

Apparently Python throws an exception. This surprised me; I expected it to throw only for integers. Throwing for floats is weird and unsafe.

    >>> import math
    >>> math.log(0.0)
    Traceback (most recent call last):
      File "<python-input-2>", line 1, in <module>
        math.log(0.0)
        ~~~~~~~~^^^^^
    ValueError: expected a positive input, got 0.0
though if you use numpy floats you only get a warning:

    >>> import numpy as np
    >>> np.log(np.float64(0))
    <python-input-1>:1: RuntimeWarning: divide by zero encountered in log
    np.float64(-inf)
JavaScript works as expected:

    > Math.log(0.0)
    -Infinity
    > Math.log(-0.0)
    -Infinity
    > Math.log(-0.1)
    NaN

The author does address this further on page 14 of the SI and provides an alternative:

−z = 1 − (e − ((e − 1) − z))
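That alternative avoids ln entirely, and it checks out numerically (a quick sanity check of my own, not from the paper): expanding, 1 - (e - ((e - 1) - z)) = 1 - e + e - 1 - z = -z.

```python
import math

def neg(z):
    # Page-14 alternative: -z = 1 - (e - ((e - 1) - z)),
    # built only from subtraction and the constants 1 and e.
    e = math.e
    return 1 - (e - ((e - 1) - z))

# Holds up to floating-point rounding for a range of inputs:
for z in (0.0, 1.0, -2.5, 3.14159, 1e6):
    assert abs(neg(z) - (-z)) < 1e-9
```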

So here is the problem -- we have two constructions of -z.

Whether or not this shows up in a tree somewhere when you try to compose functions together is probably undecidable.

Either they aren't equal, and you've broken any tree that includes that construction of -z anywhere in it, _or_ you have two trees which _are_ equal but disagree on their value at every point.

Any rule that tries to rewrite one form to the other is unsound

The lack of any equational theory makes a lot of claims about it fairly nonsensical.

I spent a few days playing around with this work in Lean, and his central claim is provably wrong.

The main problem is that singularities infect everything: you can't ground the rewrite/substitution rules of an equational theory unless you can decide whether an arbitrary EML tree is zero, which is undecidable for elementary functions (because of sine).

Basically it's only valid on an undecidable subset. All of his numerical tests are carefully crafted to avoid the singularities, which are exactly where it fails, and the singularities are all over the place -- in particular in subtraction (which shouldn't have any!). He wants to sort of compile it down to actually computable functions and use those instead, but there is no equational theory you can build on top of this.

You can't restrict it to the positive reals either or it's not closed, it's trivially easy to get to complex or negative numbers.

Using extended reals doesn't fix it; you just get different undefined terms (e.g. -inf + inf, which would need to equal 0).
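Concretely, in IEEE754 (the usual machine model of the extended reals) that term is not 0 but NaN, and it propagates (a quick check):

```python
import math

inf = float('inf')

# In IEEE754, -inf + inf is an invalid operation: the result is NaN, not 0.
assert math.isnan(-inf + inf)
assert math.isnan(inf - inf)

# And NaN silently poisons any expression it appears in:
assert math.isnan((-inf + inf) - 5.0)
```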

It's quite pretty! I love the idea or I wouldn't have spent so much time on it. It just doesn't work, and none of his other candidates will work either because they all have ln(0).

The only sense in which it's true is that it can generate the elementary functions it can generate, which is just tautological. It can in no sense generate all of them.

Even his verification doesn't work, because it's set up to check only the narrow bands where the construction is valid; outside of those it's a mess.

Thinking about this some more: a NAND-style combinator for elementary functions is probably impossible. ln(x) is inescapable as part of the composition somewhere in the generator, and you can't generate subtraction without passing through it, which means the whole project is doomed.