> Hallucinations are not completely known or well understood yet.

Is that true? Is it anything more complicated than LLMs producing text optimized for plausibility rather than for any sort of ground truth?

No, it's nothing more than that, and that's the most frustrating part. I agree with your other comment (https://news.ycombinator.com/item?id=44777760#44778294): a confidence metric, or a simple "I do not know", could fix a lot of the hallucination.
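
To make the "confidence metric" part concrete, here's a rough sketch of one way you could bolt it on from the outside: ask for per-token logprobs and refuse to answer below a threshold. This assumes the OpenAI Python SDK; the model name and the 0.8 cutoff are arbitrary placeholders, and averaging token probabilities is a crude proxy that would need real calibration before you could trust it.

    import math
    from openai import OpenAI

    client = OpenAI()

    def answer_or_abstain(question: str, min_confidence: float = 0.8) -> str:
        # Request per-token log probabilities alongside the answer.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": question}],
            logprobs=True,
        )
        choice = resp.choices[0]
        token_logprobs = [t.logprob for t in choice.logprobs.content]
        # Crude confidence proxy: geometric mean of token probabilities.
        confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
        if confidence < min_confidence:
            return "I do not know."
        return choice.message.content

It's nowhere near a fix (a model can be confidently wrong token by token), but it shows how little plumbing an "I do not know" path would actually take.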

In the end, <current AI model> is optimized for engagement and for always delivering an answer, and that pushes it towards generating false answers when it doesn't know or understand something.

If its personality were more controllable, it would be a lot easier to get it to deliver humbler, less confident answers, or even to say outright that it doesn't know.