They justify their framing later on: they identify a pattern of activations that corresponds to hallucinatory behavior. I don't know whether the full paper goes on to claim these patterns appear in every instance of hallucination, but this is proof that there exist hallucinations in which the model knows[1] it is hallucinating and chooses[2] to provide an incorrect answer anyway. At least some hallucination arises from the model's "personality".

[1] i.e. the fact is contained within the model; knowledge of the model's internal workings is sufficient to determine that the output lacks a factual basis, without an external source of truth

[2] i.e. the model assigns a given output token a higher likelihood than we would expect from a model optimized for producing useful text, despite containing the information necessary to output "correct" probabilities
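
To make [1] concrete, here is a minimal, hypothetical sketch of the kind of test that operationalizes "the fact is contained within the model": a linear probe on hidden activations. This is not the paper's method; the activations and hallucination labels below are random placeholders standing in for hidden states extracted from a real model and annotations of which answers were hallucinated. If such a probe scored well above chance on real data, that would show the model internally encodes whether its answer lacks a factual basis, with no external source of truth involved.

    # Sketch only: random placeholder data, not the paper's actual setup.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    d_model = 512      # hidden-state width (placeholder)
    n_examples = 2000  # labeled (activation, hallucinated?) pairs (placeholder)

    # Placeholder activations: one vector per generated answer. In practice
    # these would be read out of the model (e.g. its residual stream).
    hidden_states = rng.normal(size=(n_examples, d_model))
    # Placeholder labels: 1 = answer was a hallucination, 0 = grounded.
    labels = rng.integers(0, 2, size=n_examples)

    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0
    )

    # Linear probe: a single hyperplane in activation space.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # With real activations, accuracy well above chance would indicate the
    # model "knows" when its answer has no factual basis.
    print("probe accuracy:", probe.score(X_test, y_test))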