It's meant to counter and correct the claim that

"LLMs produce either truth or they produce hallucinations."

Claims worded like that give the impression that if we could just reduce or eliminate the hallucinations, only truth would remain.

But that's not the case. What is the case is that all of the output is the same thing, hallucination, and that some of those hallucinations just so happen to reflect reality (or expectations), and so appear to embody truth.

It's like rolling a die while hoping for a particular number, and when that number comes up saying "the die knew what I wanted".

An infinite number of monkeys typing on an infinite number of typewriters will eventually produce the script for Hamlet.

That doesn't mean the monkeys know about Shakespeare, or Hamlet, or even words, or even what they are doing, chained to their typewriters.

We've simply found a way to optimize the infinite typing monkeys so they output something that passes for Hamlet far sooner than infinity.
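For what it's worth, here's a toy sketch in Python of that point. The corpus, the bigram counting, and the character-level sampling are all illustrative assumptions, nothing like how a real model is built, but they show the shape of the argument: the plain monkey and the "optimized" monkey run the exact same random draw, and only the probability table behind the draw differs.

```python
import random
from collections import Counter, defaultdict

# Toy sketch only (not how any real LLM works): both the "monkey" and the
# "optimized monkey" take the exact same action, drawing the next character
# at random. The only difference is the probability table behind the draw.
# Neither process knows whether any given draw happens to be "true".

CORPUS = "to be or not to be that is the question"
ALPHABET = sorted(set(CORPUS))

# "Training" here is just counting which character tends to follow which.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    follow_counts[prev][nxt] += 1

def next_char(prev, optimized, rng):
    """Draw one character; the sampling step is identical either way."""
    if optimized and follow_counts[prev]:
        chars, weights = zip(*follow_counts[prev].items())
        return rng.choices(chars, weights=weights)[0]
    return rng.choice(ALPHABET)  # the classic monkey: uniform over the keys

def generate(optimized, length=40, seed=0):
    rng = random.Random(seed)
    out = "t"
    while len(out) < length:
        out += next_char(out[-1], optimized, rng)
    return out

print("monkey:          ", generate(optimized=False))
print("optimized monkey:", generate(optimized=True))
```

The second stream only reads more like the corpus because its probability table was shaped by it; the draw itself is just as blind as the uniform one, which is the whole point.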