I don't like the title of this paper, since most people in this space probably think of language models not as producing a distribution (wrt which they are indeed invertible, which is what the paper claims) but as producing tokens (wrt which they are not invertible [0]). Also the author contribution statement made me laugh.
Spoiler, but for those who do not want to open the paper, the contribution statement is:
"Equal contribution; author order settled via Mario Kart."
If only more conflicts in life were settled via Mario Kart.
Envisioning our most elderly world leaders throwing down in Mario Kart, fighting for 23rd and 24th place as they bump into walls over and over and struggle to hold their controllers properly… well, it's a very pleasant thought.
Yeah, it would be fun if we could reverse-engineer the prompts from auto-generated blog posts. But that's not quite what this paper enables.
Still, it is technically correct. The model produces a next-token probability distribution, and then a sampling strategy is applied to that distribution to produce a sequence of tokens.
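A minimal sketch of that split, with made-up logits standing in for a real forward pass (the vocabulary size and seed here are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    logits = rng.normal(size=50_000)      # one score per vocabulary token

    # The model's output in the paper's sense: a full probability distribution.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # The model's output in the everyday sense: a single token. Greedy decoding,
    # temperature, top-k, etc. are all choices made *after* the distribution
    # exists, and each of them throws information away.
    greedy_token = int(np.argmax(probs))
    sampled_token = int(rng.choice(len(probs), p=probs))

The invertibility claim lives entirely in the first half; the second half is where it breaks down.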
Depends on your definition of the model. Most people would be pretty upset with the usual LLM providers if they drastically changed the sampling strategy for the worse and claimed to not have changed the model at all.
Tailoring the message to the audience is really a fundamental principle of good communication.
Scientists and academics demand an entirely different level of rigor compared to customers of LLM providers.
Sure, but they went slightly overboard with that headline and they knew it. But oh well, their paper is getting a lot of eyes and discussion, so it's a success.
I feel like, if the feedback on your paper is "this is overdone / they claim more than they prove / it's kinda hype-ish", you're going to get fewer references in future papers.
That would seem to be counter to the "impact" goal for research.
Fair enough, that might be more my personal opinion than sound advice for successful research. Also, I understand that you have a very limited amount of time to get your research noticed in this field. Who knows if it's still relevant two years down the line.
LLM providers are in the stone age with sampling today, and it's on purpose: better sampling algorithms push the diversity of synthetically generated data too high, which leaves your model especially vulnerable to distillation attacks.
This is why we get top_p/top_k on the big three closed-source models despite min_p and far better sampling algorithms having existed since 2023 (or, in TFS's case, since 2019).
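For anyone who hasn't seen it, here's a rough sketch of the min_p idea (the 0.1 default is just illustrative, not what any provider ships): keep every token whose probability is at least min_p times the top token's probability, renormalize, and sample.

    import numpy as np

    def min_p_sample(probs: np.ndarray, min_p: float = 0.1,
                     rng: np.random.Generator | None = None) -> int:
        rng = rng or np.random.default_rng()
        # The cutoff scales with the model's confidence: a peaked
        # distribution keeps few tokens, a flat one keeps many.
        keep = probs >= min_p * probs.max()
        filtered = np.where(keep, probs, 0.0)
        filtered /= filtered.sum()
        return int(rng.choice(len(probs), p=filtered))

Unlike a fixed top_k, the number of surviving tokens adapts to how peaked the distribution is, which is roughly the argument for it giving more diverse but still coherent samples.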
So, scientists came up with a very specific term for a very specific function, its meaning got twisted by commercial actors and the general public, and it's now the scientists' fault if they keep using it in the original, very specific sense?
I agree it is technically correct, but I still think it is the research-paper equivalent of clickbait (and considering enough people misunderstood it that the authors issued a semi-retraction, that seems reasonable).
I disagree. Within the research community (which is the target of the paper), that title means something very precise and not at all clickbaity. It's irrelevant that the rest of the Internet has an inaccurate notion of "model" and other very specific terms.
In a field with as much public visibility as this one, it is naive to only think of the academic target audience, especially when choosing a title like this. As a researcher you are responsible for communicating your findings both to other experts and to outsiders, and that includes choosing appropriate titles. (Though I think we fundamentally disagree about the role of researchers here.) It's like writing a title that says "drinking only 200ml of water a day leads to weight loss", which is technically true, but misleading.
But I bet you could reconstruct a plausible set of distributions by just rerunning the autoregression on a given text with the same model. You won't invert the exact prompt, but it could give you a useful approximation.
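Something like this, sketched with Hugging Face transformers (gpt2 is just a stand-in for whatever model you suspect produced the text):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).eval()

    text = "The quick brown fox jumps over the lazy dog"
    ids = tok(text, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits        # shape (1, seq_len, vocab_size)

    # Distribution over token i+1 given tokens 0..i, for every position i.
    per_position = torch.softmax(logits[0, :-1], dim=-1)

You get the exact distributions the model assigns along that text, but only conditioned on the text itself, not on whatever hidden prompt preceded it.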