The interactive lambda-calculus interpreter seems to do the right thing, though I haven't pushed it very hard.
Can't comment on the delta-nets. If you're looking for a real person who's been plugging away at parallel & optimal reduction of lambda terms, this is where to look: https://github.com/VictorTaelin
I don't think "lambda-reduction" is a red flag. The standard term would be "beta-reduction", but that's the incumbent algorithm TFA claims to replace or improve on, so why not give the new one a new name?
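For anyone who hasn't poked at this: the "incumbent" beta-reduction is simple to sketch. Here's a minimal normal-order reducer with capture-avoiding substitution in TypeScript, since the repo is TypeScript. All names here are mine, not taken from the repo's `lambdacalc.ts`:

```typescript
// Terms of the untyped lambda calculus as a tagged union.
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

const v = (name: string): Term => ({ tag: "var", name });
const lam = (param: string, body: Term): Term => ({ tag: "lam", param, body });
const app = (fn: Term, arg: Term): Term => ({ tag: "app", fn, arg });

function freeVars(t: Term): Set<string> {
  switch (t.tag) {
    case "var": return new Set([t.name]);
    case "lam": { const s = freeVars(t.body); s.delete(t.param); return s; }
    case "app": return new Set([...freeVars(t.fn), ...freeVars(t.arg)]);
  }
}

let fresh = 0;
// subst(t, x, s) = t[x := s], renaming bound variables to avoid capture.
function subst(t: Term, x: string, s: Term): Term {
  switch (t.tag) {
    case "var": return t.name === x ? s : t;
    case "app": return app(subst(t.fn, x, s), subst(t.arg, x, s));
    case "lam":
      if (t.param === x) return t; // x is shadowed here
      if (freeVars(s).has(t.param)) {
        // rename the bound variable before substituting
        const p = `${t.param}_${fresh++}`;
        return lam(p, subst(subst(t.body, t.param, v(p)), x, s));
      }
      return lam(t.param, subst(t.body, x, s));
  }
}

// One step of normal-order (leftmost-outermost) reduction; null at normal form.
function step(t: Term): Term | null {
  if (t.tag === "app") {
    if (t.fn.tag === "lam") return subst(t.fn.body, t.fn.param, t.arg); // beta
    const f = step(t.fn); if (f) return app(f, t.arg);
    const a = step(t.arg); if (a) return app(t.fn, a);
  } else if (t.tag === "lam") {
    const b = step(t.body); if (b) return lam(t.param, b);
  }
  return null;
}

function normalize(t: Term): Term {
  for (let n = step(t); n; n = step(t)) t = n;
  return t;
}
```

The point of the optimal/parallel-reduction work (interaction nets, and presumably delta-nets) is precisely that this tree-walking substitution duplicates redexes, which sharing graphs avoid.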
But if I were to go sniffing for red flags:
From the first commit:
lambdacalc.ts: // The original lambda calculus introduced by Church was the 'relevant' lambda calculus which doesn't allow for weakening/erasure. This is why I add the '+' below to indicate that the lambda calculus started at 1936 but was extended afterwards.
What?
util.ts: why is this full of Gaussian elimination on matrices? The paper doesn't mention it.
Maybe they mean weak vs. "strong" lambda calculus? Or typed vs. untyped?