To some degree, *all* of an LLM's answers are made-up facts. For things that are abundantly present in the training data, those are almost always correct. For topics that are not common knowledge (and allow for a lot of variability there), you should always check.
I've started to think of LLMs as a form of lossy compression of the available knowledge which, when prompted, produces "facts".
> I've started to think of LLMs as a form of lossy compression of the available knowledge which, when prompted, produces "facts".
That is almost exactly what they are and what you should treat them as.
A lossily compressed corpus of publicly available information with a dose of randomness. The most fervent skeptics like to call LLMs "autocorrect on steroids", and they are not really wrong.
An LLM is an autocorrect inasmuch as humans are replicators. Something important gets lost in that "explanation".
Humans do much more than replicate; that is just one function among many.
What does an LLM do, other than output a weighted prediction of tokens based on its training data? Everything you can use an LLM for is a manipulation of that functionality.
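To make "a weighted prediction of tokens" concrete, here is a toy, standard-library-only sketch; the candidate tokens and scores are made up and no real model is involved, it just shows scoring candidates and sampling one according to the weights:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The capital of France is" (illustrative numbers only).
candidates = ["Paris", "Lyon", "a", "the"]
logits = [9.1, 3.2, 1.5, 0.7]

probs = softmax(logits)

# Sampling is where the randomness comes in: usually "Paris",
# occasionally something else.
next_token = random.choices(candidates, weights=probs, k=1)[0]
print(dict(zip(candidates, [round(p, 3) for p in probs])), "->", next_token)
```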
> An LLM is an autocorrect inasmuch as humans are replicators.
an autocorrect... on steroids.
What are humans, fundamentally, then?
That is a good question, and I guess we have made some progress since Plato, whose definition was: a man is a featherless biped.
But I think we still do not know.
The AI of old sci-fi used to be an entity that had a database of hard facts and could search it instantly.
I think that's the right direction for modern AI to move in. ChatGPT already falls back to web searches often. So replace the search engine with a curated knowledge database, train the LLM to consult that database for every fact, and hallucinations will be gone.
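For what it's worth, "consult the database for every fact" is roughly what retrieval-augmented generation tries to do. Here is a minimal sketch of the lookup-before-answering step, with a toy dict standing in for the curated knowledge base; all names and data below are hypothetical, and a real system would feed the retrieved fact to the model as context rather than returning it verbatim:

```python
from typing import Optional

# Toy curated knowledge base; in the scheme described above this would be
# a large, vetted fact store rather than a dict (hypothetical example data).
FACTS = {
    "boiling point of water (sea level)": "100 °C",
    "speed of light in vacuum": "299,792,458 m/s",
}

def lookup(query: str) -> Optional[str]:
    """Return a curated fact if we have one, otherwise None."""
    for key, value in FACTS.items():
        if query.lower() in key:
            return f"{key}: {value}"
    return None

def answer(question: str) -> str:
    """Consult the knowledge base first; refuse to guess if nothing matches."""
    fact = lookup(question)
    if fact is None:
        return "I don't have a curated source for that, so I won't guess."
    return f"According to the knowledge base, {fact}."

print(answer("speed of light"))
print(answer("population of Atlantis"))
```

The interesting design choice is what to do when retrieval comes up empty; this sketch refuses to answer rather than letting the model fill the gap.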