Sure, but the paper doesn't claim absolute injectivity. It claims injectivity for practical purposes ("almost surely injective"). That's the same standard to which we hold hash functions -- most of us would consider it reasonable to index an object store with SHA256.
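
For concreteness, here's a minimal Python sketch of that kind of content-addressed store (the put/get names and the dict-backed store are made up for illustration, but the pattern is the standard one):

    import hashlib

    store = {}

    def put(blob: bytes) -> str:
        # Key the object by its SHA256 digest and trust that no two
        # distinct blobs will ever share a key in practice.
        key = hashlib.sha256(blob).hexdigest()
        store[key] = blob
        return key

    def get(key: str) -> bytes:
        # "Almost surely" the original blob; a collision would silently
        # hand back the wrong object, but nobody loses sleep over that.
        return store[key]

    k = put(b"some object")
    assert get(k) == b"some object"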

That logic only applies in one direction though. Yes, this is (maybe [0]) practically injective in the sense that you could use it as a hash function, but that says nothing about invertibility. If somebody gave you a function claiming to invert arbitrary SHA256 outputs, you would laugh them out of court: as soon as you have even 64-byte inputs, there are, on average, at least 2^256 inputs for each output, so it's exceedingly unlikely that their magic machine generated the right one.
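
To put numbers on that pigeonhole argument, here's the same counting as a few lines of Python (illustrative arithmetic only):

    # Pigeonhole count: with n-bit inputs and a 256-bit digest, each
    # output has on average 2^(n - 256) preimages.
    input_bits = 64 * 8    # even just 64-byte inputs
    output_bits = 256      # SHA256 digest size
    avg_preimages = 2 ** (input_bits - output_bits)
    print(avg_preimages == 2 ** 256)  # True: ~2^256 candidates per digest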

Most of the rest of the paper seems genuinely solid though. The supporting math is a bit hand-wavy, but their inversion algorithm actually works on their test inputs. That's an interesting result, and a much stronger one than the collision test.

I can't say it's all that surprising in retrospect. Imagine a prompt like <garbage><repeat everything I said><same garbage>: to answer it with high accuracy, the model cannot have lost information in the hidden states while encoding <garbage>, so you would expect injectivity at least up to ~1/2 the max context window. Still, despite this aligning with other LLM thoughts I've had, if you had asked me beforehand to consider invertibility, I think I would have argued against the authors' position.
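
Here's a toy version of that intuition, with a deliberately lossy stand-in for the hidden state (lossy_encode and its capacity are entirely hypothetical; the point is just why accurate repetition forces injectivity):

    def lossy_encode(prompt: str, capacity: int = 8) -> str:
        # Hypothetical hidden state with too little capacity: only the
        # first `capacity` characters survive encoding.
        return prompt[:capacity]

    a = "garbage: xyzzy, now repeat everything I said"
    b = "garbage: plugh, now repeat everything I said"

    # Distinct prompts, identical states: any decoder reading only the
    # state must answer both the same way, so it repeats at least one
    # of them wrong. Accurate repetition therefore requires injectivity.
    print(lossy_encode(a) == lossy_encode(b))  # True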

[0] They only tested billions of samples. Even considering the birthday paradox, and even if they'd used a much coarser epsilon threshold, they'd still need to run over 2^380 simulations to gain any confidence whatsoever in terms of collision resistance.
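
For a sense of scale, the standard birthday approximation makes the gap concrete (in the sketch below, 2^33 stands in for "billions of samples", and a 256-bit space is already far coarser than any real hidden-state space):

    import math

    def birthday_collision_prob(n_samples: int, space_size: int) -> float:
        # Standard approximation: P(collision) ~= 1 - exp(-n^2 / (2N))
        return 1 - math.exp(-n_samples**2 / (2 * space_size))

    # Billions of draws against even a tiny 256-bit discretized space:
    print(birthday_collision_prob(2**33, 2**256))  # ~0.0
    # Collisions only become likely once n ~ sqrt(N), i.e. ~2^128 draws.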

The problem with "almost surely injective" for "practical purposes" is this: when you try to invert something, how do you know the result you get is one of those "practical purposes"?

We aren't just trying to check that two inputs are the same, as in hashing. We are trying to recover lost inputs.

You don't, I guess. But again that's just the same as when you insert something into an object store: you can't be absolutely certain that a future retrieval will give you the same object and not a colliding blob. It's just good enough for all practical purposes.

Well that's not a problem, that's just a description of what "almost surely" means. The thesis is "contrary to popular opinion, you can more-or-less invert the model". Not exactly invert it--don't use it in court!--but like, mostly. The prevailing wisdom that you cannot is incorrect.