"Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind."
Modelling text describing the world is not modelling (some aspect) of the world?
Modelling the probability that a reader likes or dislikes a piece of text is not modelling (some aspect) of a reader's state of mind?
>Modelling text describing the world is not modelling (some aspect) of the world?
The text describes the world to humans. That is the crucial thing you are missing: it is very subjective.
Imagine that you learn the grammar of a foreign language without learning the meaning of the words. You might be able to make grammatically valid sentences, but you still will not understand a single thing that something written in that language describes. Yet it will be perfectly clear to someone who actually understands the meaning of the words.
When you train LLMs on large volumes of text that describe logically consistent facts in a million different ways, the "logic" sort of becomes part of the grammar that the model learns. That is, logic becomes a higher kind of "grammar", an enormous set of grammatical rules that the model captures. But that does not mean the model can do actual logic.
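The distinction between learning surface statistics and learning logic can be seen even in a toy model. This is a minimal sketch (the corpus and vocabulary are invented for illustration): a bigram model that learns only which word follows which. It can emit well-formed sequences it never saw, but it has no notion of whether they are true.

```python
import random
from collections import defaultdict

# Toy bigram "language model": pure surface statistics,
# no semantics and no logic.
corpus = [
    "cats are mammals", "mammals are animals",
    "dogs are mammals", "animals are alive",
]

follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=3, rng=random):
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Well-formed but possibly false: the model happily produces chains
# like "animals are mammals", which reverses the real relation.
print(generate("cats"))
print(generate("animals"))
```

The point of the sketch: "grammatical" for this model just means "statistically plausible word transitions", which is exactly the kind of higher-order grammar the comment describes, and it is compatible with logical nonsense.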
Thanks for your explanation, I find it much more intuitive than the paper's.
In your opinion, does a Calculus solver model certain aspects of the world?
No? There's no model involved. It's all just probabilistic. LLMs understand what you're thinking as well as a mood ring.
There is no such thing as "just probabilistic" (maybe a philosophical exception could be made for a uniform random distribution, or for whatever provides the little dose of randomness required to get nondeterministic results). Probabilities always exist in the context of a model. LLMs model language, but language itself is a model of something else. My money would have been on language modelling nonsense, but that is quite clearly not the case. It turns out language models the world, and so do LLMs.
The model is the thing that is learned in order to make probabilistic predictions with low entropy.
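The "low entropy" point can be made concrete. A minimal sketch (the four-token vocabulary and the probability values are invented for illustration): knowing nothing corresponds to the maximum-entropy uniform distribution, while a learned model concentrates probability mass on likely continuations.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

vocab_size = 4
# "No model": uniform over the vocabulary, maximal uncertainty.
uniform = [1 / vocab_size] * vocab_size
# A learned predictive distribution: confident about the next token.
learned = [0.85, 0.10, 0.04, 0.01]

print(entropy(uniform))   # 2.0 bits
print(entropy(learned))   # well under 1 bit
```

Training is, in this framing, the process of pushing the model's predictive distributions from the uniform case toward the low-entropy case.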
The literal definition of a model is "an informative representation of an object, person, or system". I think you mean something else, though; what exactly are you trying to express?
Nothing about an LLM is “just”. In what precise sense do you mean it is probabilistic?
There's a reason stochastic was used in the original phrase instead of "probabilistic."
While most inference runs are intentionally non-deterministic, even a purely deterministic one would still be stochastic, in the sense that the model itself was built by a process in which the statistical frequency, sequencing, etc. of the training text, and the follow-up processes, all heavily influence the result.
Because of that, the output is not expected to be 100% perfect 100% of the time, but to have a good probability of being like-in-kind to the training data (and useful/relevant as a result).
(As compared to a non-stochastic model, like arithmetic on integers, where 2+2 is always gonna be 4 and you don't have a chance of coming up with some novel pair of inputs to addition that will cause your arithmetic to miss the mark.)
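The contrast above can be sketched in a few lines. This is a toy illustration (the logits and token strings are invented, not from any real model): greedy decoding over a fixed distribution is fully deterministic, sampling from the same distribution is not, and integer arithmetic sits outside the stochastic framing entirely.

```python
import math
import random

# Invented next-token scores for the prompt "2 + 2 =".
logits = {"4": 2.0, "5": 0.5, "fish": -1.0}

def softmax(scores):
    z = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / z for k, v in scores.items()}

probs = softmax(logits)

# Greedy decoding: same input, same output, every time.
greedy = max(probs, key=probs.get)

# Sampling: usually "4", occasionally something else.
rng = random.Random()
draws = [rng.choices(list(probs), weights=list(probs.values()))[0]
         for _ in range(20)]

# Either way the *model* only makes "4" probable; integer addition
# makes it certain, with no failure distribution at all.
print(greedy, draws, 2 + 2)
```

Even with greedy decoding the system remains stochastic in the sense described: correctness is a matter of how much probability mass the training process happened to put on "4", not a guarantee.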