102 points by jy14898 3 days ago | 22 comments

Author here. Thanks for posting.

I spent several years' worth of weekends working on this, and I'm glad to see it here on Hacker News.

I started working on this problem when I learned about Lamping's algorithm for optimal lambda reduction [1]. He invented a beautiful algorithm (often referred to as his "abstract algorithm"), which uses fan-in and fan-out nodes to reduce lambda terms optimally (with optimal sharing). Unfortunately, in order to make it fully work, some fans needed to duplicate one another while others needed to cancel one another. To determine this correctly Lamping had to extend his abstract algorithm with several delimiter node types and many additional graph reduction rules. These delimiter nodes perform "bookkeeping", making sure the right fan nodes match. I was dissatisfied with the need for these additional nodes and rules. There had to be a better way.

My goal was to try to implement Lamping's abstract algorithm without adding any delimiter nodes, and to do it under the interaction net paradigm to ensure perfect confluence. I tried countless solutions, and finally Delta-Nets was born. Feel free to ask any questions.

I recently started building a programming language on top of Delta-Nets, called Pur (https://pur.dev/).

Feel free to follow along this journey:

https://x.com/danaugrs

https://x.com/purlanguage

[1] https://dl.acm.org/doi/pdf/10.1145/96709.96711

I linked to the paper in a recent comment [1]. The author has been active in the Higher Order Community Discord channel for quite a while. The Higher Order Company [2] is developing HVM (Higher Order Virtual Machine), a high performance implementation of interaction nets for both CPUs and GPUs, and started the channel to discuss development and related topics, such as the Bend language for programming the HVM, discussed on HN at [3].

The paper manages to present previous work on Levy's optimal beta reduction in a more streamlined fashion, generalizing duplicators to so-called replicators that avoid the need for separate bookkeeping gadgets (like croissants and brackets). Its author also proves himself to be a skilled programmer in bringing his theory to life with this web-based evaluator.

The app contrasts traditional graph-based λ-calculus reduction (which replaces nodes with entire subgraphs) with interaction (which makes only local modifications and hence needs more steps), while showing that semantics is preserved.

[1] https://news.ycombinator.com/item?id=46034355

[2] https://higherorderco.com/

[3] https://news.ycombinator.com/item?id=40390287

I find it much easier to see what is going on when selecting λ-calculus instead of Δ-Nets. E.g. for the mandatory Y Combinator,

λf.(λx.f (x x)) (λx.f (x x))

for which the difference with

λf.(λx.f (x x)) (λx.f (x x) f)

is very clear, whereas with Δ-nets the difference is more subtle. I guess that's because the Δ-nets visualization carries more information than the λ-calculus one.

reminds me of https://github.com/HigherOrderCO/HVM

I see Salvadori has a fork, so they are obviously aware of it. Unsure whether they're proposing the exact same thing without reference or citation…

This is quite different. Salvadori's work aims for optimal reduction of the full lambda calculus (which requires something called "bookkeeping"/"oracle"), while HOC works on optimal/parallel reduction of a certain subset of the lambda calculus.

Both approaches have been researched for a long time now, where HOC's subset is typically referred to as the "abstract algorithm". For example, a version of the lambda calculus where any variable can be used at most once (the "affine lambda calculus") can be reduced optimally with interaction nets without requiring any bookkeeping.

The novel thing about Salvadori's work is that it develops a new (and better explained) bookkeeping mechanism.

What is this about? A pointer to a tutorial or a wiki link would be nice for someone who has no idea what this is. Thank you

Co-signed: 10 hours in and the comments are exclusively people who seemingly already know each other riffing on something that's not clear to an audience outside the community, or replying to a coarser version of this request with ~0 information. (Some tells: referring to other people by first name; having a 1:1 discussion about the meaning of a fork by some other person.)

[dead]

What the hell is this?

The linked paper: https://arxiv.org/pdf/2505.20314 claims the squiggles they introduce are apparently a model to solve Levy-optimal parallel reduction of lambda terms.

But the author has no affiliation, it's very weird that they're calling this "lambda-reduction", and it heavily smells of AI slop?

I hope I'm wrong but it doesn't look right. Can anyone with expertise in this field chime in?

Author here. Other experts in this field have also used the term "lambda reduction", including Levy himself [1] and Lamping [2], both of whom are referenced in the Delta-Nets paper. "Lambda-reduction" is clearly an abbreviation of lambda-calculus reduction.

[1] https://dl.acm.org/doi/abs/10.1145/143165.143172

[2] https://dl.acm.org/doi/pdf/10.1145/96709.96711

HN Guidelines: "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

I'm not being "generically" negative, I'm being very specifically negative.

We have a paper from someone not working in the field, with no affiliation, with an abstract that claims to "solve the longstanding enigma with groundbreaking clarity" (a sentence never before uttered by a human being in flesh and blood), and that takes 4 (four) citations to justify that the lambda calculus is Turing-complete, a fact that's well known to every undergraduate student.

I'm sorry if this gives reviewer #2 vibes, but this doesn't look right to me and I'm asking if someone with actual expertise in the field can clarify what's happening.

The AI slop statement is harsh. The website looks nice.

The author, Daniel Augusto Rizzi Salvadori, and the GitHub user https://github.com/danaugrs align. I couldn't comment on the actual content, though.

By "affiliation" they mean an institute or company funding the author's research/work. It's quite rare, if it ever happens, for someone to find an innovative algorithm, let alone write a technical paper, as a hobbyist.

What a strange expectation. Obviously humans didn't wait for these institutions to exist before creating ideas and notations. The fact that these institutions now exist doesn't make individuals with their own desire to express novel ideas disappear. All the more so when these institutions, with their elitist mindset, will actively reject whatever goes too far off the rails of their own social bubbles.

The interactive lambda-calculus interpreter looks like it does the right thing, not that I've tried to push it too hard.

Can't comment on the delta-nets. If you're looking for a real person who's been plugging away at parallel & optimal reduction of lambda terms, this is where to look: https://github.com/VictorTaelin

I don't think "lambda-reduction" is a red flag. The "real" term would be "beta-reduction" (but that's the incumbent algorithm which TFA claims to replace or improve on - so why not give it a new name?)

But if I were to go sniffing for red flags:

From the first commit:

lambdacalc.ts: // The original lambda calculus introduced by Church was the 'relevant' lambda calculus which doesn't allow for weakening/erasure. This is why I add the '+' below to indicate that the lambda calculus started at 1936 but was extended afterwards.

What?

util.ts: Why is this full of Gaussian elimination of matrices? The paper doesn't mention it.

Weak vs. "strong" lambda calculus maybe? Typed vs untyped?

> it's very weird they're calling this "lambda-reduction"

That was my reaction as well, having only ever heard of β-reduction, α-conversion (renaming to prevent variable collisions), and η-reduction (the equivalence of λx.f x with f itself, provided x does not occur free in f). Sloppy use of nomenclature is absolutely a red flag.

The annihilating interaction between abstraction and application nodes is well-known in the area of interaction net research to ~correspond to β-reduction, as is also explained in the associated research paper [1].

α-conversion is not required in interaction nets. η-reduction is an additional rule not typically discussed, but see for example [2].

[1] https://arxiv.org/pdf/2505.20314

[2] https://www.sciencedirect.com/science/article/pii/S030439750...

Yes.

To be transparent: I don't understand this stuff all that well and it's entirely possible I'm missing something, but everything here is weird AF.

- Who is the author? Why does he have no affiliation?

- What is the main result of the paper? How does it improve on the state of the art? Even for stuff that's way beyond my pay grade, I can usually tell from the abstract. I'm completely baffled here.

- Why do they introduce graphical notation without corresponding formal definitions?

- Why is it written in this weird style where theorems are left implicit? Usually, there's at least a sketch of proof.

- Why does it not address that the thing they're claiming to do isn't elementary recursive as per https://doi.org/10.1006/inco.2001.2869?

Again, it's entirely possible that it's a skill issue on my part and I'd love to be corrected, but I'm completely baffled and I still have absolutely no idea of what I'm looking at. Am I the stupid one and it's obvious to everyone else?

Note that, in the interaction net literature, it is pretty common to introduce graphical notation without corresponding textual counterparts. See the works of Lafont, Lamping, Asperti, Guerrini, and many others [1]. (However, the textual calculus can be absolutely crucial for formal proof.)

The absence of proofs and benchmarks undermines the paper for me as well. I find it absolutely critical to demonstrate how the approach works in comparison with already established software, such as BOHM [2].

Parallel beta reduction not being elementarily recursive is not a thing to address, nor a thing one can address. Lamping's abstract algorithm already performs the minimal amount of work to reduce a term -- one cannot improve it further.

From my understanding, the paper aims to optimize the bookkeeping overhead present in earlier implementations of optimal reduction. However, as I said, the absence of formal/empirical evidence of correctness and extensive benchmarks makes the contributions of the paper debatable.

[1] https://github.com/etiamz/interaction-net-resources

[2] https://github.com/asperti/BOHM1.1