I was rather explicit about that: you memorize them from trusted sources (or directly observe them). There's no question about it. It's simply not something you can bootstrap from a computer that doesn't know them.
And as the person upthread pointed out, LLMs are in the middle of destroying many of those trustworthy sources by poisoning the internet with a firehose of falsehoods.
It's all about trust. How do we help machines (and humans) know what to trust?
See Tom Scott’s rather prescient lecture to the Royal Society titled, “There is No Algorithm for Truth”.
We can't even help humans figure out who or what to trust. Our chances with machines are slim.