Right, all models are inherently wrong. It's up to the user to know about their limits / uncertainty.
But I think this 'being wrong' is kind of confusing when talking about LLMs (in contrast to systems/scientific modelling).
In what they model (language), current LLMs are really good and accurate, except for the occasional Chinese character in the middle of a sentence.
But what we mean by LLMs 'being wrong' is, most of the time, being factually wrong in answering a question that is expressed in language. That's a layer on top of what the model is designed to model.
EDIT:
So saying 'the model is wrong' when it's factually wrong above the language level isn't fair.
I guess this is essentially the same thought as 'all they do is hallucinate'.
('All models are wrong' is generally attributed to George Box.)