In layman's terms, this seems to mean that given a certain unedited LLM output, plus complete information about the LLM, they can determine what prompt was used to create the output. Except that in practice this almost never works. Am I understanding correctly?

No, it's about the distribution being injective, not any single sampled response. So you'd need a lot of outputs for the same prompt, plus full knowledge of the LLM, and then in theory you could reconstruct the original prompt.
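To make the distinction concrete, here's a toy sketch (everything below is hypothetical: a hard-coded two-prompt "model" standing in for an LLM's next-token distributions). Any single sample is ambiguous, but with many samples the empirical distribution picks out the prompt:

```python
import random
from collections import Counter

# Toy stand-in for a fully known LLM: each prompt induces a fixed
# next-token distribution. The injectivity claim is about these
# distributions being distinct, not about any one sampled output.
MODEL = {
    "prompt A": {"yes": 0.7, "no": 0.2, "maybe": 0.1},
    "prompt B": {"yes": 0.3, "no": 0.6, "maybe": 0.1},
}

def sample(prompt, n, rng):
    """Draw n outputs from the model under the given prompt."""
    dist = MODEL[prompt]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=n)

def recover_prompt(outputs):
    """Pick the prompt whose distribution best matches the samples
    (total variation distance to the empirical frequencies)."""
    n = len(outputs)
    freq = Counter(outputs)
    def tv(dist):
        toks = set(dist) | set(freq)
        return sum(abs(dist.get(t, 0.0) - freq[t] / n) for t in toks)
    return min(MODEL, key=lambda p: tv(MODEL[p]))

rng = random.Random(0)
outputs = sample("prompt B", 1000, rng)
print(recover_prompt(outputs))  # prints: prompt B
```

A single "yes" could have come from either prompt; 1000 samples make the distributions distinguishable. Real prompt reconstruction would of course have to search an enormous prompt space rather than a two-entry lookup.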

[deleted]

No, it says nothing about LLM output being invertible.