The difference doesn't matter when you have a shallow structure, direct field access, and only a few lines of code. But field access does not compose easily once you have a nested hierarchy of objects. Your natural choice in the "OOP style" is to write a lot of boilerplate to point at each different field you want to get/set. Say you get bored of the tedium and want "higher-order" accessors that compose well -- because ultimately all look-up operations are fundamentally similar, and you should only need to write a traversal once per data structure. E.g., instead of writing yet another depth-first search with for loops, you could tie together a standard DFS implementation (the traversal) from a library with accessors for the fields you care about.
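A minimal sketch of that reuse in Haskell (the Employee/org names here are made up for illustration): the DFS lives in the containers library (Data.Tree), and we only supply the accessor for the field we care about.

  import Data.Tree (Tree (..), flatten)  -- containers: generic rose tree

  data Employee = Employee { name :: String, salary :: Int } deriving Show

  org :: Tree Employee
  org = Node (Employee "ada" 100)
          [ Node (Employee "bob" 80) []
          , Node (Employee "eve" 90) [Node (Employee "mal" 70) []]
          ]

  -- Same stock traversal, different accessors; no hand-written loops.
  allSalaries :: Tree Employee -> [Int]
  allSalaries = map salary . flatten      -- flatten = preorder DFS

  giveRaises :: Int -> Tree Employee -> Tree Employee
  giveRaises n = fmap (\e -> e { salary = salary e + n })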
One way to think of the goal of the functional paradigm is to allow extreme modularity (reuse) with minimal boilerplate [1]. The belief is that minimal boilerplate + maximum reuse (not in ad-hoc ways, but through the strict structure of higher-order patterns) leads to easily maintainable, bug-free code -- especially in rapidly evolving codebases -- for the one-time cost of understanding these higher-order abstractions. This is why people keep harping on pieces that "compose well". The emphasis on immutability is merely a means to that end, and lenses are part of the solution that makes immutability ergonomic (composable). For the general idea, see this illustrative blog post [2], which rewrites the same small code block ten times -- making it more modular and terse each time.
[1] https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.p...
[2] https://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Wa...
Once the language is expressive enough to compose pieces well and write extremely modular code, the next bit that people get excited about is smart compilers that can transform all this into efficient low-level implementations (e.g. by fusing accesses), enforce round-trip consistency between the get and set halves of a lens (or complain when it's violated), etc.
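For concreteness, the round-trip consistency in question is the classic lens laws. A sketch of how you'd state them as checkable properties (the Get/Set/putGet/getPut/putPut names are placeholders of mine, not a standard API):

  type Get s a = s -> a
  type Set s a = s -> a -> s

  -- PutGet: you get back exactly what you put in.
  putGet :: Eq a => Get s a -> Set s a -> s -> a -> Bool
  putGet get set s a = get (set s a) == a

  -- GetPut: putting back what you just got changes nothing.
  getPut :: Eq s => Get s a -> Set s a -> s -> Bool
  getPut get set s = set s (get s) == s

  -- PutPut: the last put wins.
  putPut :: Eq s => Set s a -> s -> a -> a -> Bool
  putPut set s a b = set (set s a) b == set s b

Feed these to a property tester (e.g. QuickCheck) and a flawed get/set pair gets flagged automatically.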
> But field access does not compose easily once you have a nested hierarchy of objects. Your natural choice in the "OOP style" is to write a lot of boilerplate to point at each different field you want to get/set.
This is a self-inflicted problem. Make the data public and there is no boilerplate.
> Your natural choice in the "OOP style" is to write a lot of boilerplate to point at each different field you want to get/set.
Your natural alternative to lenses in imperative languages is usually to just store a reference or pointer to the part you want to modify. Like a lens, but in-place.
But then you're modifying the thing, not creating a new object with different content. It's different semantics.
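For instance, in Haskell terms (a tiny sketch with a made-up Point type), the "update" is an expression that yields a new value while the original stays intact:

  data Point = Point { x :: Int, y :: Int } deriving Show

  demo :: IO ()
  demo = do
    let p  = Point 1 2
        p' = p { x = 10 }  -- functional update: builds a brand-new Point
    print p                -- Point {x = 1, y = 2}   (original untouched)
    print p'               -- Point {x = 10, y = 2}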
Yeah, but I’m saying that in 90% of the cases where a functional program would use lenses, the corresponding imperative program would just use references.
Sure, but can you make that imperative program (with pointers and all) as modular/composable? That's the whole point -- lenses are not an end unto themselves; only a tool in service of that goal.
I think we’re talking past each other.
Lenses serve many purposes. All I’m saying is that in practice, the most common role they fulfil is to act as a counterpart for mutable references in contexts where you want or need immutability.
Can the use of lenses make a program more “composable”? Maybe, but if you have an example of a program taking advantage of that flexibility I’d like to see it.
Do check out the links in my original comment above, which explain that the whole motivation behind all this (of which lenses are just a small part) is modularity. Modularity and composability are two sides of the same coin -- being able to construct a complex whole by combining simple parts -- depending on whether you view it top-down or bottom-up.
Suppose you refactor a field `T.e.a.d` to `T.e.b.d`, for whatever reason. How many places in your codebase will you have to edit to complete this change?
Dot access exposes to the outside world the implementation detail of where `d` lives, while lenses let you abstract that away as just another function application (of a "deep-getter" function), so your code stays extremely modular and flexible. A good language implementation then hopefully lets you use this abstraction/indirection without a significant performance penalty.
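To make that concrete, a sketch using the lens package (T/E/A and the field names are placeholders mirroring the `T.e.a.d` example above):

  {-# LANGUAGE TemplateHaskell #-}
  import Control.Lens

  data A = A { _d :: Int } deriving Show
  data E = E { _a :: A }   deriving Show
  data T = T { _e :: E }   deriving Show
  makeLenses ''A   -- generates lenses d, a, e from the underscored fields
  makeLenses ''E
  makeLenses ''T

  -- The only place that records where `d` lives. If `d` moves from
  -- `e.a.d` to `e.b.d`, this one line changes; every call site stays put.
  deepD :: Lens' T Int
  deepD = e . a . d

  demo :: T -> (Int, T)
  demo t = (t ^. deepD, t & deepD .~ 9)   -- deep get, deep (immutable) set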
void set_d(T*);
Yup, that’s basically the idea behind lenses, once you add a few more ergonomic niceties.
The Haskell approach is to take any such pattern, abstract it out into a library, and reuse it instead of ever implementing that plumbing again -- i.e. a very generic get/set_foo which can be specialized to specific fields/structures. Following that, you could also write a lens library in C++ if you don't want to redo this for every project.
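To show how little plumbing there is to abstract, here's a hand-rolled sketch from scratch (no library; view/set/over are my own names here, though the real lens package works the same way):

  {-# LANGUAGE RankNTypes #-}
  import Data.Functor.Const (Const (..))
  import Data.Functor.Identity (Identity (..))

  -- The generic machinery, written once:
  type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

  view :: Lens s a -> s -> a
  view l = getConst . l Const

  set :: Lens s a -> a -> s -> s
  set l a = runIdentity . l (const (Identity a))

  over :: Lens s a -> (a -> a) -> s -> s
  over l f = runIdentity . l (Identity . f)

  -- Specializing to a concrete field is then one line per field:
  data Point = Point { px :: Int, py :: Int } deriving Show

  pxL :: Lens Point Int
  pxL f (Point a b) = (\a' -> Point a' b) <$> f a

  -- view pxL (Point 1 2)   == 1
  -- set pxL 9 (Point 1 2)  == Point {px = 9, py = 2}
  -- Lenses compose with plain (.), which is what makes deep access compose.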
The point is not that it can’t be done in non-functional languages, but that it’s an uncommon pattern AFAICT; the common approaches result in much less modular code.