This mental model also directly contradicts the whole purpose of an embedding, which is to describe the original text in a more interpretable form. If a piece of content in the original can be used for search, comparison, and so on, then pretty much by definition it has to be stored in the embedding.
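To make this concrete, here is a toy sketch (a hypothetical bag-of-words "embedding", not a learned one — the names and texts are illustrative): any information a similarity score relies on must, by construction, be present in the vectors being compared.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. A learned embedding is denser,
    # but the same principle holds: whatever a search or comparison uses
    # must be stored in the vector itself.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over the sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = embed("invert the embedding")
close = embed("embedding inversion recovers text")
far   = embed("recipe for banana bread")

# The query matches the topically related text only because the shared
# content ("embedding") is stored in both vectors.
assert cosine(query, close) > cosine(query, far)
```

The same logic applies to a dense neural embedding: if retrieval can distinguish documents by some piece of content, that content survived the encoding.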

Similarly, this result can be rephrased as "Language Models process text." If the LLM weren't invertible with regard to a piece of input text, it couldn't attend to that text either.