What this paper suggests is that LLM hidden states preserve their inputs with high semantic fidelity. If that’s the case, then the real distortion isn’t inside the network; it’s in the optimization trap at the decoding layer, where rich representations get collapsed into outputs that feel synthetic or generic. In other words, the math may be lossless, but the interface is where meaning erodes.
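A minimal sketch of the collapse being described, with entirely hypothetical dimensions and random weights: a hidden state is projected to a full probability distribution over tokens, which carries measurable information (its entropy), and greedy decoding then discards all of it except a single index.

```python
import numpy as np

# Toy illustration (hypothetical sizes, random weights): project a hidden
# state h to logits over a 5-token vocabulary, forming a rich distribution.
rng = np.random.default_rng(0)
h = rng.normal(size=8)        # hidden state (8 dims, illustrative only)
W = rng.normal(size=(8, 5))   # output/unembedding projection

logits = h @ W
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The distribution carries information beyond its top choice...
entropy = -np.sum(probs * np.log(probs))
# ...but greedy decoding keeps only a single token index.
token = int(np.argmax(probs))

print(f"distribution entropy: {entropy:.3f} nats")
print(f"decoded token id: {token}")
```

The point of the sketch is only that `argmax` is a many-to-one map: very different distributions (and so very different hidden states) can decode to the same flat output, which is one way "lossless math, lossy interface" can happen.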