Depends on your definition of the model. Most people would be pretty upset with the usual LLM providers if they drastically changed the sampling strategy for the worse and claimed to not have changed the model at all.

Tailoring the message to the audience is really a fundamental principle of good communication.

Scientists and academics demand an entirely different level of rigor compared to customers of LLM providers.

Sure, but they went slightly overboard with that headline, and they knew it. But oh well, their paper is getting a lot of eyes and discussion, so by that measure it's a success.

I feel like, if the feedback on your paper is "this is overdone / they claim more than they prove / it's kind of hype-ish," you're going to get fewer citations in future papers.

That would seem to run counter to the "impact" goal of research.

Fair enough, that might be more my personal opinion than sound advice for successful research. I also understand that you have a very limited window to get your research noticed in this field. Who knows if it's even relevant two years down the line.

LLM providers are in the stone age with sampling today, and it's on purpose: better sampling algorithms make the diversity of synthetically generated data too high, leaving your model especially vulnerable to distillation attacks.

This is why we're stuck with top_p/top_k on the big 3 closed-source models, despite min_p and far better LLM sampling algorithms existing since 2023 (or, in TFS's case, since 2019).
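
For anyone who hasn't run into min_p: here's a minimal sketch in NumPy contrasting it with classic nucleus (top_p) sampling. The threshold values are illustrative, not anything the providers actually ship, and this is the textbook form of each algorithm rather than any particular implementation.

```python
import numpy as np

def top_p_sample(logits: np.ndarray, top_p: float = 0.9, rng=None) -> int:
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches top_p, renormalize, and sample. The cutoff is a
    fixed probability mass regardless of how confident the model is."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # tokens sorted by probability, descending
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    masked = np.zeros_like(probs)
    masked[order[:cutoff]] = probs[order[:cutoff]]
    return int(rng.choice(len(probs), p=masked / masked.sum()))

def min_p_sample(logits: np.ndarray, min_p: float = 0.1, rng=None) -> int:
    """min_p sampling: keep tokens whose probability is at least
    min_p * p(top token), renormalize, and sample. The cutoff scales with
    the model's confidence: peaked distributions prune hard, flat ones
    keep many candidates, which is where the extra diversity comes from."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()      # confidence-scaled threshold
    masked = np.where(keep, probs, 0.0)
    return int(rng.choice(len(probs), p=masked / masked.sum()))
```

The confidence-scaled threshold is exactly the diversity trade-off above: on a flat distribution, top_p still truncates down to a fixed mass, while min_p lets nearly the whole tail through.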

So, scientists came up with a very specific term for a very specific function, its meaning got twisted by commercial actors and the general public, and it's now the scientists' fault if they keep using it in the original, very specific sense?