> But experienced designers don't, presumably because it's a bad idea, and I don't understand what the badness is.

I don't think it's a bad idea, and I don't see anyone else saying that. I did try to give you a couple practical considerations, but I don't think they stop your idea for a toy language from existing or suggest anything you are trying to do is "bad".

> We don't want add() and subtract() when we have + and -; why should we live with set() when we have =?

This question might actually be leading you closer to answers regarding your confusion than you think it is.

One over-simplifying perspective is that imperative languages are the ones that most want operators like + and -, while functional languages have mostly been the ones that want add() and subtract() functions. A good functional language wants "everything" to be a function. If you look back at early lisps, almost all of them supported `(add 2 3)`, but not all of them supported `(+ 2 3)`.

(ETA: accidental post splice was here.)

Then the functional languages picked up currying, where it is useful to refer to `(add 2)` as the function that adds two to its next argument, and even `(add)` as the function that takes the next two arguments and adds them together. In a truly functional language, designers often do want `add()` and `subtract()` as reusable, curryable functions more than they want `+` and `-`, because as tools `add()` and `subtract()` work more like the rest of the language. As for `set()`, Lisps have almost always only ever had `(let variableName …)`-type forms. `=` in most classic functional languages almost always meant an equality check. It's very much imperative languages that gifted us `=` as "assignment" or "set", and then, as a consequence, made the equality check a doubled (or worse, tripled) `==`.
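A small Haskell sketch of what that partial application buys you in practice (the `add` and `addTwo` names are invented for the example):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- `add 2` is itself a function: it waits for the remaining argument
addTwo :: Int -> Int
addTwo = add 2

main :: IO ()
main = do
  print (add 2 3)                 -- 5
  print (addTwo 40)               -- 42
  print (map (add 10) [1, 2, 3])  -- [11, 12, 13]
```

No special "partial application" syntax is needed; leaving off an argument is enough, which is exactly why curryable `add()` feels so natural in this family of languages.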

It's only this far after imperative languages have "won" as much as they have, and have shown a fondness for complicated "PEMDAS" parsers, that infix operators have become so common. (It's not quite a universal fact, but a lot of functional languages have had much simpler parsers than their imperative neighbors. Infix operators are a huge, complicated thing to parse, as you may have already noticed in your toy language.)

You denigrate Haskell offhand, but a thing I appreciate that is relevant to all this is that Haskell was also one of the first languages to try hard to merge the two worlds: it supports infix usage of any function, and the infix operators of the imperative world aren't that special syntactically, they are just infix functions. This is also why you'll see a lot of Haskell documentation refer to it as `(+)` instead of `+`: `(+)` is the "real name" of the function and `+` is just the infix form. Haskell wants an `add()` and a `subtract()`; it calls them `(+)` and `(-)`. It supports currying, so `(+) 2` can be a function.
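A quick sketch of that prefix/infix duality, using nothing beyond standard Prelude functions (the top-level names are invented for the example):

```haskell
-- Any operator can be used prefix, and any named function can be used infix:
prefixPlus :: Int
prefixPlus = (+) 2 3                 -- prefix form of the + operator

infixMax :: Int
infixMax = 2 `max` 3                 -- infix (backtick) form of an ordinary function

curriedPlus :: [Int]
curriedPlus = map ((+) 2) [1, 2, 3]  -- (+) partially applied, like any function

main :: IO ()
main = do
  print prefixPlus   -- 5
  print infixMax     -- 3
  print curriedPlus  -- [3, 4, 5]
```

The symmetry is the point: `+` and `max` are the same kind of thing, and only the call syntax differs.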

A functional programming language sort of wants everything to be a function, and operators are just special names for functions. Many functional languages, both historic and current, do ask "why do we need + and - when we have (add) and (subtract)?" and even "why do we need = when we have (let)?" (It may also be useful to note the subtly different imperative versus functional instincts on where the parentheses go when discussing function names: imperative languages often put them as a suffix, almost like an afterthought, while functional languages often put them around the name, directing attention inside.)

You suggest several times that lenses are "theory" and something "only FP aficionados will use", but lenses are pretty "basic" and "boring" from the perspective of "everything is functions". You don't need a lot of theory to understand lenses, even if the goofy-sounding name sometimes makes them sound far more complicated than they are.
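To make the "basic and boring" claim concrete, here's a from-scratch sketch of the common van Laarhoven lens encoding, with no lens library involved (`Point` and `_x` are invented for the example; real libraries add conveniences on top of exactly this shape):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A lens is nothing but a function with a particular polymorphic shape.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- "Get" and "set" both fall out of the same lens by picking a suitable functor.
view :: Lens s a -> s -> a
view l = getConst . l Const

set :: Lens s a -> a -> s -> s
set l x = runIdentity . l (const (Identity x))

data Point = Point { px :: Int, py :: Int } deriving (Eq, Show)

-- A hand-written lens focusing the x field.
_x :: Lens Point Int
_x f (Point x y) = fmap (\x' -> Point x' y) (f x)

main :: IO ()
main = do
  print (view _x (Point 1 2))   -- 1
  print (set _x 9 (Point 1 2))  -- Point {px = 9, py = 2}
```

Everything here is ordinary function definition and application, which is the sense in which lenses are "just functions" rather than deep theory.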

Which isn't to say that languages can't do better with syntactic sugar, just that one of the reasons this often isn't handled with syntactic sugar, from a functional programming perspective, is "why would it need to be? it's very simple". Immutable types have a longer history in functional programming languages, so their view of what is "simple" and what deserves "syntactic sugar" follows naturally from that very different perspective.

There is absolutely a lot of space to keep exploring new syntactic sugar and increasingly better ways for functional languages to take the best ideas of imperative languages. (Again, I appreciate the light humor that Haskell has the reputation today of being the language most drowned in FP theory, while also having done an absurd amount to explore imperative syntax from a functional standpoint, both in the way that infix operators are just infix syntax for functions, and in things like do-notation.)
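As a small illustration of that last point, here's a sketch of how do-notation's imperative look is just sugar over ordinary function calls (the `imperativeLooking` and `desugared` names are invented here):

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- Reads like imperative, mutable code:
imperativeLooking :: IO Int
imperativeLooking = do
  ref <- newIORef 0
  modifyIORef' ref (+ 40)
  modifyIORef' ref (+ 2)
  readIORef ref

-- The same thing written with the underlying functions (>>= and >>):
desugared :: IO Int
desugared =
  newIORef 0 >>= \ref ->
  modifyIORef' ref (+ 40) >>
  modifyIORef' ref (+ 2) >>
  readIORef ref

main :: IO ()
main = do
  imperativeLooking >>= print  -- 42
  desugared >>= print          -- 42
```

The imperative surface is a notation; underneath, it's functions all the way down, which is the merging of the two worlds described above.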

Please keep playing with your toy language and imperative/mutable syntax for immutable data structures; I think that's great. I've done similar experiments in my own toy languages. I think the answer to why "big languages" haven't done it yet isn't that it is a "bad" idea, but that it is a matter of perspective. Most functional languages don't want imperative syntax, or don't want functional/immutable things to look like imperative/mutable syntax. Again, not because it is "bad", just because they have very different family trees.

Oh, I meant that I suspect it's a bad idea. I've got a gut feeling that I can't put my finger on. I implemented it anyway because my language's design "pointed" in that direction based on prior decisions I had made (re: not having any reference semantics), but I can't shake the feeling that I've created something internally consistent but confusing to people trying to learn it. Or that it will hit a wall at some point and I'll suddenly realize why this is not done. I'll have to think about it some more. Thanks for the discussion!

Ah, that makes sense. I went on at much greater length about that, but apparently my browser accidentally posted a partially complete draft.

Hopefully the rest of that post adds some additional perspective.