Intel did a spectacularly poor job with the ecosystem around the memory cells. They made two plays, and both were flops.

1. “Optane” in DIMM form factor. This targeted (I think) two markets. First, use as slower but cheaper and higher-density volatile RAM. There was actual demand — various caching workloads, for example, wanted hundreds of GB or even multiple TB in one server, and Optane was a route to get there. But the machines and DIMMs never really became available. Then there was the idea of using Optane DIMMs as persistent storage. This was always tricky: the DDR interface wasn’t meant for it, Intel also seems to have a lot of legacy tech in the way (their caching system and memory controller), and, for whatever reason, they seem to be barely capable of improving their own technology. They had multiple serious false starts in the space: a power-supply-early-warning scheme using NMI or MCE to idle the system, a horrible platform-specific register to poke to ask the memory controller to kindly flush itself, and the stillborn PCOMMIT instruction. (See the first sketch after this list for where the flush story eventually landed.)

2. Very nice NVMe devices. I think this was more of a failure of marketing. If they had marketed a line of SSDs that, coupled with an appropriate filesystem, could deliver a 99th-percentile fsync latency of 5 microseconds (easy enough to verify; see the second sketch below), I bet people would have paid. But they did nothing of the sort — instead they just threw around the term “Optane” inconsistently.
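For what it’s worth, the flush mess in item 1 did eventually converge on something sane after PCOMMIT was withdrawn: on an ADR platform you store through a mapped pmem file, write back the affected cache lines with CLWB (or CLFLUSHOPT), and fence. A minimal sketch in C (the mount point is made up, and it needs -mclwb to build):

```c
/* Minimal sketch of the post-PCOMMIT persistence model on an ADR platform:
 * store to a mapped pmem file, write back the dirty cache lines with CLWB,
 * then fence. Assumes a file on a DAX-capable filesystem (path is made up).
 * Build with: cc -O2 -mclwb flush.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <immintrin.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_SYNC                     /* older glibc headers may lack these */
#define MAP_SYNC 0x80000
#endif
#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03
#endif

#define CACHELINE 64

static void flush_range(const void *addr, size_t len)
{
    uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHELINE - 1);
    for (; p < (uintptr_t)addr + len; p += CACHELINE)
        _mm_clwb((void *)p);         /* write back the line, keep it cached */
    _mm_sfence();                    /* order the write-backs before later stores */
}

int main(void)
{
    int fd = open("/mnt/pmem/log", O_CREAT | O_RDWR, 0600);  /* DAX mount assumed */
    if (fd < 0 || ftruncate(fd, 4096) != 0)
        return 1;

    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (buf == MAP_FAILED)
        return 1;

    memcpy(buf, "hello, pmem", 12);
    flush_range(buf, 12);            /* durable once this returns (ADR) */

    munmap(buf, 4096);
    close(fd);
    return 0;
}
```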
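And the 5-microsecond claim in item 2 isn’t exotic to verify. Here’s a rough p99 fsync-latency probe; there’s nothing Optane-specific in it, and the path and iteration count are arbitrary:

```c
/* Rough p99 fsync-latency probe: append a 4 KiB block, fsync, time the pair.
 * Point it at a file on the device under test. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define ITERS 10000

static int cmp_ll(const void *a, const void *b)
{
    long long x = *(const long long *)a, y = *(const long long *)b;
    return (x > y) - (x < y);
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "fsync_probe.dat";
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    static char block[4096];
    static long long ns[ITERS];
    memset(block, 0xab, sizeof block);

    for (int i = 0; i < ITERS; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (write(fd, block, sizeof block) != sizeof block) { perror("write"); return 1; }
        if (fsync(fd) != 0) { perror("fsync"); return 1; }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        ns[i] = (t1.tv_sec - t0.tv_sec) * 1000000000LL + (t1.tv_nsec - t0.tv_nsec);
    }

    qsort(ns, ITERS, sizeof ns[0], cmp_ll);
    printf("p50 %lld ns  p99 %lld ns  max %lld ns\n",
           ns[ITERS / 2], ns[(int)(ITERS * 0.99)], ns[ITERS - 1]);
    close(fd);
    return 0;
}
```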

These days one could build a PCM-backed, CXL-connected, memory-mapped drive, and the performance might be awesome. Heck, I bet it wouldn’t be too hard to get a GPU to stream weights directly off such a device at NVLink-like speeds. Maybe Intel should try it.
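To be concrete about what “memory mapped” buys you: if such a device showed up to Linux as a device-DAX node (the path below is purely hypothetical), the CPU side would be nothing more than mmap and ordinary loads and stores:

```c
/* Sketch of load/store access to a hypothetical byte-addressable CXL/PCM drive
 * exposed as a device-DAX node. The device path is made up; the rest is plain
 * mmap. Device-DAX mappings generally need to match the device's configured
 * alignment (often 2 MiB). */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1UL << 30;                    /* map 1 GiB of the device */
    int fd = open("/dev/dax0.0", O_RDWR);      /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }

    uint8_t *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* From here it's just memory: a loader could point weight tensors straight
     * at this mapping, or a GPU runtime could register and read it directly. */
    uint64_t first;
    memcpy(&first, p, sizeof first);
    printf("first qword: 0x%016" PRIx64 "\n", first);

    munmap(p, len);
    close(fd);
    return 0;
}
```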

One of the many problems was trying to limit the use of Optane to Intel devices. They should have manufactured and sold Optane memory and let other players build on top of it at a low level.

> Optane memory

Which “Optane memory”? The NVMe product always worked on non-Intel systems. The NVDIMM products that I played with only ever worked on a very small set of rather specialized Intel platforms. I bet AMD could have supported them about as easily as Intel, and Intel barely ever managed to support them.

The consumer "Optane memory" products were a combination of NVMe and Intel's proprietary caching software, the latter of which was locked to Intel's platforms. They also did two generations of hybrid Optane+QLC drives that only worked on certain Intel platforms, because they ran a PCIe x2+x2 pair of links over a slot normally used for a single X2 or x4 link.

Yes, the pure-Optane consumer "Optane memory" products were at a hardware level just small, fast NVMe drives that could be used anywhere, but they were never marketed that way.

Exactly. I happen to have all AMD sitting around here, and buying my first Optane devices was a gamble, because I had no idea if they'd work. The only reason I ever did is that they got cheap at one point and I could afford the gamble.

That uncertainty couldn't have done the market any favors.

I feel like this is proving my point. You can’t read “Optane” and have any real idea of what you’re buying.

Also… were those weird hybrid SSDs even implemented in actual hardware, or were they part of the giant series of massive kludges in the “Rapid Storage” family, where some secret sauce in the PCIe host lied to the OS about what was actually connected so that an Intel driver could replace the OS’s native storage driver (NVMe, AHCI, or perhaps something worse, depending on generation) and implement all the actual logic in software?

It didn’t help Intel that some major storage companies started selling very, very nice flash SSDs in the meantime.

> were those weird hybrid SSDs even implemented by actual hardware, or were they part of the giant series of massive kludges

They were definitely part of the series of massive kludges. But aside from the Intel platforms they were marketed for, I never found a PCIe host that could see both of the NVMe devices on the drive. Some hosts would bring up the x2 link to the Optane half of the drive, some hosts would bring up the x2 link to the QLC half of the drive, but I couldn't find any way to get both links active even when the drive was connected downstream of a PCIe switch that definitely had hardware support for bifurcation down to x2 links. I suspect that with appropriate firmware hacking on the host side, it may have been possible to get those drives fully operational on a non-Intel host.