Say you want to do obj.child.foo[3].bar += 2, but without mutation: all the data is immutable, so you need to make a deep copy along the path.
Lenses are an embedded DSL for doing this, with syntax that reads similarly to the mutable variant. Additionally, they let you compose many such transformations.
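As a rough sketch of the idea in Haskell -- a plain getter/setter pairing rather than the encoding the real lens library uses, with hypothetical record names, and dropping the array index to keep it short:

```haskell
-- A lens as a getter plus a copying setter (a simplification of what
-- the Haskell lens library really does, but enough to show the mechanics).
data Lens s a = Lens { view :: s -> a, set :: s -> a -> s }

-- Two lenses compose into a lens that focuses deeper into the structure.
compose :: Lens s a -> Lens a b -> Lens s b
compose outer inner = Lens
  { view = view inner . view outer
  , set  = \s b -> set outer s (set inner (view outer s) b)
  }

-- "Modify" = read through the lens, apply a function, write back a copy.
over :: Lens s a -> (a -> a) -> s -> s
over l f s = set l s (f (view l s))

-- Hypothetical records standing in for obj.child.bar.
data Child = Child { bar :: Int }     deriving Show
data Obj   = Obj   { child :: Child } deriving Show

childL :: Lens Obj Child
childL = Lens child (\o c -> o { child = c })

barL :: Lens Child Int
barL = Lens bar (\c b -> c { bar = b })

-- The immutable reading of "obj.child.bar += 2": every node on the path
-- is a fresh copy; nothing is mutated.
bump :: Obj -> Obj
bump = over (childL `compose` barL) (+ 2)

main :: IO ()
main = print (bump (Obj (Child 40)))  -- Obj {child = Child {bar = 42}}
```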
The other replies covered the answer about immutability well, but I have the further question: why isn't this built into languages as syntax sugar, so that OP's suggested line would work with immutable structures?
As a dilettante at programming language design, I have my own toy language. It uses exclusively immutable data structures (C++ "immer"). I present it to the programmer as simple value semantics. `obj.foo[5].bar.a = 2` works and sets `obj` to a new structure where the path through `foo`, `bar`, to `a` has all been rewritten. Since I put it in as a language feature, users don't have to learn about lenses. Why isn't this technique more common in programming language design? Is it so offensive that the syntax `obj.a = 2` ends up rebinding `obj` itself? The rule in my language is that assignment rebinds the leftmost "base" of a chain of `.` field and `[]` array index accesses on the LHS. I'm ignorant of the theory or practical consideration that might lead a competent designer not to implement it this way.
It's an interesting question why immutability is not built into more languages as the default, so that the most intuitive assignment syntax produces new values.
Without having any expertise in the matter, I'd guess that mutability has the advantage of performance and efficient handling of memory.
obj.foo[5].bar.a = 2

An immutable interpretation of this would involve producing new objects and arrays, moving or copying values.

Another possible advantage of the mutable default is that you can pass around references to inner values.
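To make the first point concrete, here is roughly what that copying looks like written out by hand -- a sketch in Haskell with hypothetical record names, using Data.Sequence so that replacing index 5 is itself a copying update rather than a mutation:

```haskell
import Data.Sequence (Seq, index, update)

-- Hypothetical types mirroring the obj.foo[5].bar.a path above.
data Inner = Inner { a :: Int }
data Bar   = Bar   { bar :: Inner }
data Obj   = Obj   { foo :: Seq Bar }

-- The immutable reading of `obj.foo[5].bar.a = 2`: copy every node
-- along the path and rebuild the spine, leaving the original intact.
setA :: Int -> Obj -> Obj
setA v obj =
  let oldBar   = index (foo obj) 5              -- read along the path
      newInner = (bar oldBar) { a = v }         -- copy the innermost record
      newBar   = oldBar { bar = newInner }      -- copy its parent
  in  obj { foo = update 5 newBar (foo obj) }   -- copy the array spine
```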
That's always the case with immutable data structures; this assignment syntax didn't create that problem. If you used lenses to write 2 into "a", and you expected to get back a new "obj", you would still need to produce all those new objects and arrays. That's just immutable data structure stuff. I'm only asking about the assignment syntax here.
One of the reasons functional languages like to make mutable code look very different from immutable code is to try to make it clear which is which when reading the code, to help avoid mistakes.
> Is it so offensive that the syntax `obj.a = 2` ends up rebinding `obj` itself?
That does imply that the `obj` binding itself is mutable, so if you are trying for entirely immutable data structures (by default), you do probably want to avoid that.
This is why the syntax sugar, in languages that have been exploring syntax sugar for it, starts to look like:
let newObj = { obj with a = 2 }

You still want to be able to name the new immutable binding.

My language doesn't have any other kind; it's not immutable by default, it's immutable only. There are no mutable types or reference semantics, so there's no other kind of type that I need to differentiate. That's my question--why haven't other languages taken this approach? Many newer languages today are full-throated defenses of immutable data structures--why do they still make the mutable structures the easiest, syntactically, to change? Why not the other way around? Julia is fastest with immutable structures--why provide a built-in syntax for complex assignment to mutable types, but then relegate lenses to a library that only FP aficionados will use? We don't want add() and subtract() when we have + and -; why should we live with set() when we have =?
I must be missing something, because it worked out pretty nicely in my toy language. Complex assignments are written in exactly the way that people expect them to be. That's why I think it must be about taste or practical consideration--obviously it's possible to write a language like this. But experienced designers don't, presumably because it's a bad idea, and I don't understand what the badness is. Since my language is a toy, I likely haven't hit the practical considerations.
Lenses, to me, feel at home in Haskell where the entire language is a game to see how much theory you can implement in the "userspace" of a tight, maximally-orthogonal FP language. But this is Julia, a monstrously large, imperative, Algol-family language with every possible language feature built-in, intended to be a practical language for analysis by people who aren't programming language experts. Julia's compiler already has knowledge of immutable types which it uses for optimization. Seems like they could do better than lenses if they weren't forced to implement it as a library in the language itself.
> Julia is fastest with immutable structures--why provide a built-in syntax for complex assignment to mutable types, but then relegate lenses to a library that only FP aficionados will use?
This is not really accurate. Performance in Julia is heavily organized around mutability, in particular for arrays. The main reason Julia does not fully embrace immutability for everything is, simply, performance.
There is some discussion about this from smarter people than me over here (same thread): https://news.ycombinator.com/item?id=45769149
> Counterintuitively Julia recommends the use of immutable data types for performance reasons...
Those folks are likely more able to respond to your counterclaim than me.
> But experienced designers don't, presumably because it's a bad idea, and I don't understand what the badness is.
I don't think it's a bad idea, and I don't see anyone else saying that. I did try to give you a couple practical considerations, but I don't think they stop your idea for a toy language from existing or suggest anything you are trying to do is "bad".
> We don't want add() and subtract() when we have + and -; why should we live with set() when we have =?
This question might actually be leading you closer to answers regarding your confusion than you think it is.
One over-simplifying perspective is that imperative languages are the languages that most want operators like + and -, and functional languages have mostly been the languages that want to use add() and subtract() functions. A good functional language wants "everything" to be a function. If you look back at early Lisps, almost all of them supported `(add 2 3)`, but not all of them supported `(+ 2 3)`.
(ETA: accidental post splice was here.)
Then the functional languages picked up currying, where it is useful to refer to the function `(add 2)` as the function that adds two to the next argument, and even `(add)` as the function that takes the next two arguments and adds them together. In a truly functional language, designers often do want `add()` and `subtract()` as reusable, curryable functions more than they want `+` and `-`, because as tools, `add()` and `subtract()` work more like the rest of the language. As for `set()`, Lisps have almost always only ever had `(let variableName …)` type functions. `=` in most classic functional languages almost always meant an equality check. It's very much imperative languages that gifted us `=` as "assignment" or "set" and then, as a consequence, made equality double `==` (or worse, triple `===`).
It's only now, after imperative languages have "won" as much as they have and proven there are favored uses for complicated "PEMDAS" parsers, that infix operators have become so common. (It's not quite a universal fact, but a lot of functional languages have had much simpler parsers than their imperative-language neighbors. Infix operators are a huge, complicated thing to parse, if you haven't already noticed in your toy language.)
You denigrate Haskell offhand, but a thing I appreciate that is relevant to all this is that Haskell was also one of the first languages to try hard to merge the two worlds: it supports infix usage of any function, and the infix operators of the imperative world aren't that special syntactically; they are just infix functions. This is also why you'll see a lot of Haskell documentation refer to it as `(+)` instead of `+`: `(+)` is the "real name" of the function and `+` is just the infix form. Haskell wants an `add()` and `subtract()`; it calls them `(+)` and `(-)`. It supports currying, so `(+) 2` can be a function.
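For concreteness, a few lines of standard Haskell showing those points -- any function can be used infix with backticks, operators are ordinary functions, and partial application covers the curried `(add 2)` case:

```haskell
add :: Int -> Int -> Int
add = (+)                  -- (+) is the "real name"; + is just its infix form

three :: Int
three = 1 `add` 2          -- any ordinary function can be used infix

addTwo :: Int -> Int
addTwo = (+ 2)             -- a section: the curried "function that adds two"

main :: IO ()
main = print (three, addTwo 40, map (subtract 1) [1, 2, 3])  -- (3,42,[0,1,2])
```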
A functional programming language sort of wants everything to be a function, and operators are just special names for functions. Many functional languages, both historic and current, do ask "why do we need + and - when we have (add) and (subtract)?" and even "why do we need = when we have (let)?" (Maybe it's useful to note, too, the subtly different imperative-versus-functional instincts on where the parentheses go when discussing function names: imperative languages often put them as a suffix, almost like an afterthought, while functional languages often wrap them around the name to direct attention inside.)
You suggest several times that lenses are "theory" and something "only FP aficionados will use", but lenses are pretty "basic" and "boring" from the perspective of "everything is functions". You don't need a lot of theory to understand lenses, even if the goofy-sounding name sometimes makes it sound far more complicated than it is.
Which isn't to say that languages can't do better with syntactic sugar, just that one of the reasons this often isn't handled with syntactic sugar from a functional programming perspective is "why would it need to be? it's very simple". Immutable types have a longer history in functional programming languages, so their view of what is "simple" and what should be "syntactic sugar" is maybe obvious to explain from their very different perspective.
There is absolutely a lot of space to keep exploring new syntactic sugar and increasingly better ways for functional languages to take the best ideas of imperative languages. (Again, I appreciate the light irony that Haskell has the reputation today of being the language most drowned in FP theory, while also having done an absurd amount of exploration of imperative syntax from a functional standpoint, both in the way that infix operators are just infix syntax for functions, and in things like do-notation.)
Please keep playing with your toy language and imperative/mutable syntax for immutable data structures, I think that's great. I've done similar experiments in my own toy languages. I think the answer to why "big languages" haven't done it yet, isn't because it is a "bad" idea, but because it is a matter of perspective. Most functional languages don't want imperative syntax or don't want functional/immutable things to look like imperative/mutable syntax. Again, not because it is "bad", just because they have very different family trees.
Oh, I meant that I suspect it's a bad idea. I've got a gut feeling that I can't put my finger on. I implemented it anyway because my language's design "pointed" in that direction based on prior decisions I had made (re: not having any reference semantics), but I can't shake the feeling that I've created something internally consistent but confusing to people trying to learn it. Or that it will hit a wall at some point and I'll suddenly realize why this is not done. I'll have to think about it some more. Thanks for the discussion!
Ah, that makes sense. I went on at longer length on that, but apparently my browser accidentally posted a partially complete draft.
Hopefully the rest of the post adds some additional perspective.
The difference doesn't matter when you have a shallow structure, can access fields directly, and have a few lines of code. But field access does not compose easily if you have a nested hierarchy of objects. Your natural choice in the "OOP style" is to write a lot of boilerplate to point to each different field you want to get/set. Say you get bored of the tedium and want "higher-order" accessors that compose well -- because ultimately all look-up operations are fundamentally similar in a sense, and you only need to write traversals once per data structure. E.g., instead of writing yet another depth-first search implementation with for loops, you could easily tie together a standard DFS implementation (traversal) from a library with accessors for the fields you care to work with.
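A small sketch of that last point in Haskell, with a hypothetical Tree type: deriving Traversable gives you the walk once per data structure, and reusable pieces like fmap and sum ride on top of it instead of another hand-written DFS:

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

-- The traversal is written (well, derived) once per data structure...
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Functor, Foldable, Traversable)

example :: Tree Int
example = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)

main :: IO ()
main = do
  print (fmap (* 10) example)  -- ...then reused: update every element
  print (sum example)          -- ...or fold over it, no for loops in sight
```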
One way to think of the goal of the functional paradigm is to allow extreme modularity (reuse) with minimal boilerplate [1]. The belief is that minimal boilerplate + maximum reuse (not in ad-hoc ways, but using the strict structure of higher-order patterns) leads to easily maintainable, bug-free code -- especially in rapidly evolving codebases -- for the one-time cost of understanding these higher-order abstractions. This is why people keep harping on pieces that "compose well". The emphasis on immutability is merely a means to achieve that goal, and lenses are part of the solution that allows great ergonomics (composability) along with immutability. For the general idea, look at this illustrative blog post [2], which rewrites the same small code block ten times -- making it more modular and terse each time.
[1] https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.p...
[2] https://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Wa...
Once the language is expressive enough to compose pieces well and write extremely modular code, the next bit that people get excited about is smart compilers that can: transform this to efficient low-level implementations (eg. by fusing accesses), enforce round-trip consistency between get & set lenses (or complain about flaws), etc.
> But field access does not compose easily if you have a nested hierarchy of objects. Your natural choice in the "OOP style" is to write a lot of boiler plate to point to each different field you want to get/set.
This is a self-inflicted problem. Make data public and there is no boilerplate.
> Your natural choice in the "OOP style" is to write a lot of boiler plate to point to each different field you want to get/set.
Your natural alternative to lenses in imperative languages is usually to just store a reference or pointer to the part you want to modify. Like a lens, but in-place.
But then you're modifying the thing, not creating a new object with different content. It's different semantics.
Yeah, but I’m saying that in 90% of the cases where a functional program would use lenses, the corresponding imperative program would just use references.
Sure, but can you make that imperative program (with pointers and all) as modular/composable? That's the whole point -- lenses are not an end unto themselves; only a tool in service of that goal.
I think we’re talking past each other.
Lenses serve many purposes. All I’m saying is that in practice, the most common role they fulfil is to act as a counterpart for mutable references in contexts where you want or need immutability.
Can the use of lenses make a program more “composable”? Maybe, but if you have an example of a program taking advantage of that flexibility I’d like to see it.
Do check out the links in my original comment above, which explain that the whole motivation behind all this (of which lenses are just a small part) is modularity. Modularity and composability are two sides of the same coin---being able to construct a complex whole by combining simple parts---depending on whether you view it top-down or bottom-up.
Suppose you refactor a field `T.e.a.d` to `T.e.b.d`, for whatever reasons. How many places in your codebase will you have to edit, to complete this change?
Dot access exposes to the outside world the implementation details of where `d` lives, while lenses allow you to abstract that as yet another function application (of a "deep-getter" function) so your code becomes extremely modular and flexible. A good language implementation then hopefully allows you to use this abstraction/indirection without a significant performance penalty.
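A sketch of the getter half in Haskell, with hypothetical record names matching the `T.e.a.d` example -- if call sites go through a composed accessor rather than spelling out the path, the `.a`-to-`.b` refactor touches exactly one definition:

```haskell
-- Hypothetical nested records for the T.e.a.d example.
data D = D Int deriving Show
data A = A { d :: D }
data E = E { a :: A }
data T = T { e :: E }

-- The only place in the codebase that knows where d lives.
dOfT :: T -> D
dOfT = d . a . e

-- Call sites keep using dOfT even if d later moves to T.e.b.d;
-- only the definition above changes.
main :: IO ()
main = print (dOfT (T (E (A (D 42)))))  -- D 42
```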
void set_d(T*);
Yup, that’s basically the idea behind lenses, once you add a few more ergonomic niceties.
The Haskell approach is to take any pattern, abstract it out into a library, and reuse it instead of ever having to implement that plumbing again -- i.e. a very generic get/set_foo which can specialize to specific fields/structures. Following that, you could also write a lenses library in C++ if you don't want to redo this for every project.
The point is not that it can’t be done in non-functional languages, but that it’s an uncommon pattern AFAICT; the common approaches result in much less modular code.
Lenses also let you take interesting alternate perspectives on your data. You can have a lens that indexes into a bit of an integer, letting you get/set a boolean, for example.
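A hand-rolled sketch of that bit example in plain Haskell (made-up names, and just the getter/setter pair rather than a real lens type):

```haskell
import Data.Bits (clearBit, setBit, testBit)

-- The "get" half: read bit i of an Int as a Bool.
getBit :: Int -> Int -> Bool
getBit i n = testBit n i

-- The "set" half: return a new Int with bit i written, original untouched.
putBit :: Int -> Bool -> Int -> Int
putBit i True  n = setBit n i
putBit i False n = clearBit n i

main :: IO ()
main = print (getBit 2 5, putBit 1 True 5)  -- (True,7): 5 = 0b101, 7 = 0b111
```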
Immutability is a central concept in functional programming.
You can, uhhh, abstract over the property, which seems cool if you're into abstracting things, but it also probably shouldn't be the thing you're abstracting over in application code.
Or on second look the sibling comment is probably right and it’s about immutability maybe.
This is equivalent to that for people who are irrationally terrified of mutability, and are willing to abandon performance.
Counterintuitively, Julia recommends the use of immutable data types for performance reasons, because immutability enables more flexible compiler optimisations.
An immutable variable can be safely shared across functions or even threads without copying. It can be created on the stack, the heap, or in a register, whatever the compiler deems most efficient.
In the case where you want to change a field of an immutable variable (the use case for lenses), immutable types may still be more efficient, because the variable was stack-allocated and copying it is cheap, or because the compiler can correctly infer that the original object is not in use anymore and thus reuse the data of the old variable for the new one.
Coming from the C++ world, I think immutability by default is pretty neat, because it enables many of the optimisations you would get from C++'s move semantics (or Rust's borrow checker) without the hassle.
There is nothing counter-intuitive or julia-specific about it:
The fastest way is to have your datastructure in a (virtual) register, and that works better with immutable structures (i.e. memory2ssa has limitations). The second fastest way is to have your datastructure allocated on the heap and mutate it. The slowest way is to have your datastructure allocated on the heap, have it immutable, copy it all the time, and then let the old copies get garbage collected. That last, slowest way is exactly what many "functional" languages end up doing. (Exception: read-copy-update is often a very good strategy in multi-threading, and is relatively painless thanks to the GC.)
The original post was about local variables -- and const declarations for local variables are mostly syntactic sugar, the compiler puts it into SSA form anyway (exception: const in C if you take the address of the variable and let that pointer escape).
So this is mostly the same as in every language: You need to learn what patterns allow the current compiler version to put your stuff into registers, and then use these patterns. I.e. you need to read a lot of assembly / llvm-IR until you get a feeling for it, and refresh your feelings with every compiler update. Most intuitions are similar to Rust/clang C/C++ (it's llvm, duh!), so you should be right at home if you regularly read compiler output.
Julia has excellent tooling to read the generated assembly/IR; much more convenient than java (bytecode is irrelevant, you need to read assembly or learn to read graal/C2 IR; and that is extremely inconvenient).
It's a similar idea to map() but for more complex objects than arrays. When people use "map" in Javascript (or most any other language that supports it) do they do so because "they are terrified of mutability, and are willing to abandon performance?"
Your comment reads like the response of someone who is struggling to understand a concept.
Only the get half is `map`-like. In combination it's more like a property descriptor, which is far easier to understand and much more efficient.
And, if it wasn't obvious, it's only the `set` half where lenses suck for performance.
Immutability gives you persistence, which can be practically useful. It’s not just fear.
Yes. O(1) snapshots are awesome! Persistent datastructures are a monumental achievement.
But that comes at a performance price, and in the end, you only really need persistent datastructures for niche applications.
Good examples: ZFS mostly solves write amplification on SSDs (it almost never overwrites data in place), and its snapshots are a useful feature for the end user. (But mostly your datastructures live in SRAM/DRAM, which permit fast overwriting, not flash -- so that's a niche application.)
Another good example is how julia uses a HAMT / persistent hash-map to implement scoped values. Scoped values are inheritable threadlocals (tasklocal; in julia parlance, virtual/green thread == task), and you need to take a snapshot on forking.
Somebody please implement that for inheritable threadlocals in java! (such that you can pass an O(1) snapshot instead of copying the hashmap on thread creation)
But that is also a niche application. It makes zero sense to use these awesome fancy persistent datastructures as default everywhere (looking at you, scala!).