The immutable approach doesn't conflate the concepts of place, time, and abstract identity the way in-place mutation does.

In mutating models, abstract (mathematical / conceptual) objects are typically modeled as memory locations, which means that object identity implies pointer identity. That's a problem when different versions of the same object need to be maintained.

It's much easier when we represent object identity by something other than pointer identity, such as (string) names or 32-bit integer keys. Such a representation lets us materialize different versions (or even the same version) of an object in multiple places at the same time, which allows us to concurrently read or write different versions of the same abstract object. It's also an enabler for serialization/deserialization: not requiring an object to be materialized in one particular place allows saving objects to disk or sending them around.
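
To make that concrete, here's a minimal Rust sketch of the idea; `ObjectId` and `VersionedStore` are made-up names for illustration, not from any particular system:

```rust
use std::collections::HashMap;

// Abstract identity is plain data (a key), not a pointer.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ObjectId(u32);

// Each abstract object maps to every version we have materialized,
// so several versions (or copies of one version) can exist at once.
#[derive(Default)]
struct VersionedStore {
    versions: HashMap<ObjectId, Vec<String>>,
}

impl VersionedStore {
    // An "update" appends a new version instead of overwriting in place.
    fn update(&mut self, id: ObjectId, value: String) {
        self.versions.entry(id).or_default().push(value);
    }

    fn latest(&self, id: ObjectId) -> Option<&String> {
        self.versions.get(&id)?.last()
    }

    fn at_version(&self, id: ObjectId, v: usize) -> Option<&String> {
        self.versions.get(&id)?.get(v)
    }
}

fn main() {
    let mut store = VersionedStore::default();
    let doc = ObjectId(7);
    store.update(doc, "draft".to_string());
    store.update(doc, "final".to_string());
    // Old and new versions of the same abstract object are both readable.
    assert_eq!(store.at_version(doc, 0), Some(&"draft".to_string()));
    assert_eq!(store.latest(doc), Some(&"final".to_string()));
}
```

Because the identity is just a key rather than an address, it also survives being written to disk or sent over the network, which a pointer can't.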

The hardware these programs run on stores objects in linear memory, so it makes sense to treat them as such.

DRAM is linear memory. Caches, less so. Register files really aren't. CPUs spend rather a lot of transistors and power to reconcile the reality of how they manipulate data within the core against the external model of RAM in a flat linear address space.

Can you clarify?

Modern CPUs do out-of-order execution, which means they need to identify and resolve register dependencies between instructions. This turns the notional linear model of random-access registers into a DAG in practice, where different instructions that might be in flight at once actually read from or write to different "versions" of a named register. Additionally, pretty much every modern CPU uses a register renaming scheme, where the register file at the microarchitectural level is larger than the one described in the software-level architecture reference, i.e. one instruction's "r7" has no relationship at all to another's "r7".
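
To illustrate just the bookkeeping, here's a toy Rust model of a rename table (not any real microarchitecture): every write to an architectural register allocates a fresh physical register, so two in-flight instructions naming "r7" can end up touching unrelated storage.

```rust
use std::collections::HashMap;

// Toy register renamer: architectural names ("r7") are just labels;
// every new write gets a fresh physical register, i.e. a new version.
struct Renamer {
    next_phys: u32,
    // Latest physical register holding each architectural register's value.
    map: HashMap<&'static str, u32>,
}

impl Renamer {
    fn new() -> Self {
        Renamer { next_phys: 0, map: HashMap::new() }
    }

    // Reads see whatever version the name currently points at.
    fn read(&self, arch: &'static str) -> Option<u32> {
        self.map.get(arch).copied()
    }

    // A write doesn't overwrite: it allocates a new physical register and
    // repoints the name, leaving earlier versions alive for older
    // in-flight instructions that still depend on them.
    fn write(&mut self, arch: &'static str) -> u32 {
        let phys = self.next_phys;
        self.next_phys += 1;
        self.map.insert(arch, phys);
        phys
    }
}

fn main() {
    let mut rn = Renamer::new();
    let first_r7 = rn.write("r7");  // e.g. p0
    let second_r7 = rn.write("r7"); // e.g. p1 -- unrelated storage
    assert_ne!(first_r7, second_r7);
    assert_eq!(rn.read("r7"), Some(second_r7));
}
```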

Caches aren't quite as mix-and-match, but they can still internally manage different temporal versions of a cache line, as well as (hopefully) mask the fact that a write to DRAM from one core isn't an atomic operation instantly visible to all other cores.

Practice is always more complicated than theory.

That doesn't affect what I said, though. Register renaming and pipelining do not make mutation go away, and they don't allow you to work on multiple things "at once" through the same pointer.

It's still logically the same thing with these optimizations, obviously -- since they aren't supposed to change the logic.

Realistically, the compiler builds a DAG when it puts code into SSA form, and then the CPU builds its own DAG to do out-of-order execution, so at a fine grain -- the basic block -- it seems to me that the immutable way of thinking about things is actually closer to the hardware.
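
As a rough illustration of what SSA does to a basic block (the variable names are made up, and Rust's fresh bindings just make the idea visible in source):

```rust
// A small "mutating" basic block...
fn mutating(a: i32, b: i32) -> i32 {
    let mut x = a + b; // SSA: x1 = a + b
    x = x * 2;         // SSA: x2 = x1 * 2
    x = x - a;         // SSA: x3 = x2 - a
    x                  // SSA: result is x3
}

// ...and the same block written the way SSA sees it: every assignment
// is a fresh, never-mutated value, and the def-use edges form a DAG.
fn ssa_style(a: i32, b: i32) -> i32 {
    let x1 = a + b;
    let x2 = x1 * 2;
    let x3 = x2 - a;
    x3
}

fn main() {
    assert_eq!(mutating(3, 4), ssa_style(3, 4));
}
```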