>Modelling text describing the world is not modelling (some aspect) of the world?
The text describes the world to humans; this is the crucial thing you miss. It is inherently subjective.
Imagine that you learn the grammar of a foreign language without learning the meaning of the words. You might be able to produce grammatically valid sentences, but you still will not understand a single thing that something written in that language describes. Yet it will be perfectly clear to someone who actually understands the meaning of the words.
When you train LLMs on large volumes of text that describe logically consistent facts in a million different ways, the "logic" sort of becomes part of the grammar that the model learns. That is, logic becomes a higher kind of "grammar", an enormous set of grammatical rules that the model captures. But that does not mean the model can do actual logic.
Thanks for your explanation; I find it much more intuitive than the paper's.
In your opinion, does a calculus solver model certain aspects of the world?