The review doesn't mention branch prediction. Call/ret instructions work really well with branch predictors. Lots of jumps (to continuations) might not work quite as well.

From the review (discussing "practical applications of continuations"):

> Was it influential to the field of computer science?

Call/cc was fairly popular in the Northwest of the US (maybe California too) among Lisp and Scheme people. Compiling continuations well on real machines was definitely a topic worth investigating at the time.

https://en.wikipedia.org/wiki/Call-with-current-continuation

Call/cc has since (thankfully) become a lot less popular.

I liked the book a lot when I read it (2-3 decades ago) but it didn't seem relevant in the slightest if you just didn't care about call/cc.

> Call/ret instructions work really well with branch predictors. Lots of jumps (to continuations) might not work quite as well.

On x86, the use of paired call/return is conflated with usage of the stack to store activation records. On AArch64, indirect branches can be marked with a hint bit indicating that they are a call or return, so branch prediction can work exactly the same with activation records elsewhere.

CPS and call/cc aren’t the same thing though they are close.

CPS is an intermediate form that a compiler can use to reason about a program. You could argue that using this form makes it easier to implement call/cc, but I don’t think that’s really true.

I vaguely recall that CPS makes it possible to reason about stack-disciplined calls, if you want to compile to call/ret instructions.
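For what it's worth, the "easier to implement call/cc" argument usually looks like this in code. A minimal sketch in OCaml (not anything from the book, and the names are mine): once everything is in CPS, the current continuation is just the `k` that's already in hand.

```ocaml
(* Sketch: call/cc for code that is already written in CPS. [f] gets an
   "escape" function that ignores whatever continuation it is handed and
   resumes the captured [k] instead. *)
let call_cc f k = f (fun v _discarded_k -> k v) k

(* Escaping from the middle of a computation: the +1 never happens. *)
let () =
  let result =
    call_cc (fun escape k -> escape 42 (fun x -> k (x + 1))) (fun r -> r)
  in
  assert (result = 42)
```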

But whatever, it really doesn’t matter. CPS is quite dead as an IR. SSA won.

CPS is fairly dead as an IR, but the (local) CPS transform seems more popular than ever with languages implementing "stackless" control effects.
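To make "stackless" concrete, here's a hand-done version of the kind of code such a transform produces (a toy OCaml sketch, names invented here): the continuation after each suspension point is reified as a closure, so nothing needs to survive on the native stack.

```ocaml
(* A suspendable computation either finishes or yields a value together
   with "the rest of the computation" as a closure. *)
type ('a, 'r) step =
  | Done of 'r
  | Yield of 'a * (unit -> ('a, 'r) step)

let rec count_up i n : (int, unit) step =
  if i > n then Done ()
  else Yield (i, fun () -> count_up (i + 1) n)

(* Drive the generator to completion, collecting the yielded values. *)
let rec drain acc = function
  | Done () -> List.rev acc
  | Yield (x, k) -> drain (x :: acc) (k ())

let () = assert (drain [] (count_up 1 3) = [1; 2; 3])
```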

As far as functional IRs go, I would say SSA corresponds most directly to (a first-order restriction of) ANF w/ join points. The main difference is the use of dominance-based scoping rules, which is certainly more convenient than juggling binders when writing transformations. The first-order restriction isn't even essential, e.g. https://compilers.cs.uni-saarland.de/papers/lkh15_cgo.pdf.
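Concretely, the join-point correspondence looks like this (a hand-written OCaml sketch, not taken from the cited paper): the local `join` function plays the role of an SSA block and its parameter plays the role of the phi.

```ocaml
(* ANF-ish code with an explicit join point: every branch tail-calls
   [join], and [join]'s parameter [r] is the moral equivalent of a phi. *)
let clamp_then_double lo hi x =
  let join r = r * 2 in
  if x < lo then join lo
  else if x > hi then join hi
  else join x

let () = assert (clamp_then_double 0 10 42 = 20)
```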

If you're interested in an IR that can express continuations (or evaluation contexts) in a first-class way, a much better choice than CPS is an IR based on the sequent calculus. As I'm sure you know (since you work with one of the coauthors), this was first experimented with in a practical way in GHC (https://pauldownen.com/publications/scfp_ext.pdf), but there is a paper in this year's OOPSLA (https://dl.acm.org/doi/10.1145/3720507) that explores this more fundamentally, without the architectural constraint of being compatible with all of the other decisions already made in GHC. Of course, one could add dominance scoping etc. to make an extension of SSA with more symmetry between producers and consumers like the classical sequent calculus.
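For anyone who hasn't seen such an IR, the usual sequent-calculus skeleton underneath is the μ/μ̃ core. A toy OCaml sketch (emphatically not the IR from either paper, just the general shape): producers build values, consumers are first-class continuations/evaluation contexts, and a command cuts one against the other.

```ocaml
(* Toy μ/μ̃-style core; constructor names invented for this comment. *)
type producer =
  | Var of string                (* a value variable *)
  | IntLit of int
  | Mu of string * command       (* μα.c — a producer that binds its consumer α *)

and consumer =
  | Covar of string              (* a continuation variable *)
  | MuTilde of string * command  (* μ̃x.c — a consumer that binds the value x it receives *)

and command =
  | Cut of producer * consumer   (* ⟨p | c⟩ — plug a producer into a consumer *)

(* ⟨42 | μ̃x.⟨x | α⟩⟩ : feed 42 to a consumer that names it x and passes it along to α. *)
let example : command = Cut (IntLit 42, MuTilde ("x", Cut (Var "x", Covar "alpha")))
```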

The C in CPS and the cc in call/cc are exactly the same thing.

It’s a continuation. You don’t need to express your program in continuation-passing style to use continuations, which is why call/cc exists.

The idea of a continuation is interesting in and of itself, independently of whether your compiler uses CPS, because the continuation as a concept is useful. It appears in effect systems, for example.
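On the effect-system point, OCaml 5's effect handlers are a nice way to see the concept without any CPS in sight (a small sketch; the `Ask` effect and `with_answer` are made up for this example): the handler receives the continuation of the `perform` site as an ordinary value it can resume.

```ocaml
(* Requires OCaml >= 5.0; Ask and with_answer are invented for this sketch. *)
open Effect
open Effect.Deep

type _ Effect.t += Ask : int Effect.t

let with_answer answer body =
  try_with body ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask ->
            (* [k] is the continuation of the [perform] site. *)
            Some (fun (k : (a, _) continuation) -> continue k answer)
        | _ -> None }

let () =
  let result = with_answer 21 (fun () -> perform Ask + perform Ask) in
  assert (result = 42)
```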

Appel's book is very good, by the way. It gives a very hands-on overview of how to implement a compiler in a functional style and neatly introduces some quite complex ideas. To me it’s among the books you can’t regret reading; it’s also quite short and easy, which helps. It's a timeless classic, like Peyton Jones's The Implementation of Functional Programming Languages, which is a great introduction to the lambda calculus and a presentation of how to implement a type checker in Miranda.

It might be a cool book, but it describes an outdated way to write compilers, even if you’re writing compilers for functional languages.

What does that mean?

The book introduces how to turn a program into CPS, why you can, and how that form can then be compiled. That’s interesting in and of itself as a way to conceptualise how a program's computation flow works and what it means for the construction of functional programs.
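As a toy illustration of that transform (in OCaml rather than the book's ML, and hand-done rather than mechanical): every intermediate result gets a name and every "return" becomes a tail call to an explicit continuation.

```ocaml
(* Direct style. *)
let rec fact n = if n = 0 then 1 else n * fact (n - 1)

(* After a hand-done CPS transform: the function never returns in the usual
   sense; it always tail-calls its continuation [k]. *)
let rec fact_cps n k =
  if n = 0 then k 1
  else fact_cps (n - 1) (fun r -> k (n * r))

let () = assert (fact_cps 5 (fun r -> r) = fact 5)
```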

It was never a popular way to write compilers, but academic books are not tutorials. That was never the point.

CPS is not that different from SSA.

https://bernsteinbear.com/assets/img/kelsey-ssa-cps.pdf

They are so extremely and utterly different.

CPS is an AST form with first-class functions that don’t return. CPS is highly opinionated about how control flow is represented and not very opinionated about how data flow is represented (maybe it’s just variable names you resolve, maybe it’s an explicit data-flow graph).

SSA is a data flow graph with a way to express looping. SSA isn’t very opinionated about control flow, and different SSA implementations do it differently.
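To put the "first-class functions that don’t return" description in code, here's a toy CPS IR in OCaml (a sketch only loosely in the spirit of Appel's datatype; the constructor names are mine): control flow is fully explicit, while data flow is just variable names.

```ocaml
(* Toy CPS IR sketch; constructor names invented for this comment. *)
type value = Var of string | Int of int

type cexp =
  | Letcont of string * string list * cexp * cexp
      (* bind a local continuation: roughly an SSA block, its params the phis *)
  | Appcont of string * value list
      (* jump to a continuation: roughly a branch to a block *)
  | Appfun of value * value list * string
      (* call a function, handing it the continuation to "return" to *)
  | Primop of string * value list * string * cexp
      (* apply a primitive, bind the result, continue *)

(* "let x = a + b in k x" written in this IR: *)
let example : cexp = Primop ("add", [Var "a"; Var "b"], "x", Appcont ("k", [Var "x"]))
```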

That paper is just saying that you can transform between the two. So what. Of course you can; you can transform between any two Turing-complete languages, and that doesn’t make them “not that different”, and definitely not the same.

> Call/cc has since (thankfully) become a lot less popular.

Delimited continuations are the state of the art now, aren't they?

Dunno. I really dislike continuations (but like Appel's book(s)) so I haven't kept up. Also, Oleg has a reputation for inventing type mountains for lots of molehill problems, so I'm naturally inclined to dismiss his ideas pretty much out of hand.