Sucking up does appear to be a personality trait. Hallucinations are not completely understood yet. We are past the stage where models just produce random strings of output. Frontier models can perform an imitation of reasoning, but hallucination seems to stem more from an inability to generalize past their training data or to properly update what the network has learned when new evidence is presented.

Hallucinations are starting to look like a cognitive bias or a cognitive deficiency in the model's intelligence, which is more of an architectural problem than a statistical one.

> Hallucinations are not completely understood yet.

Is that true? Is it anything more complicated than LLMs producing text optimized for plausibility rather than for any sort of ground truth?

No, it's nothing more than that, and that's the most frustrating part. I agree with your other comment (https://news.ycombinator.com/item?id=44777760#44778294): a confidence metric or a simple "I do not know" could eliminate a lot of hallucinations.
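
For what it's worth, the crude version of that confidence gate is easy to sketch. Below is a minimal Python illustration, assuming a hypothetical generate_with_logprobs helper standing in for whatever your model API actually returns; it turns per-token log probabilities into a rough confidence score and abstains below a threshold. It's a sketch of the idea, not a claim that this alone fixes hallucination.

    import math

    CONFIDENCE_THRESHOLD = 0.70  # purely illustrative; tune on held-out questions

    def generate_with_logprobs(prompt: str) -> tuple[str, list[float]]:
        # Hypothetical stand-in: a real implementation would call the model
        # and return the generated text plus each token's log probability.
        return "Paris is the capital of France.", [-0.05, -0.10, -0.02, -0.30, -0.08]

    def answer_or_abstain(prompt: str) -> str:
        text, token_logprobs = generate_with_logprobs(prompt)
        # Geometric mean of token probabilities = exp(mean log probability).
        confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
        if confidence < CONFIDENCE_THRESHOLD:
            return "I do not know."
        return text

    print(answer_or_abstain("What is the capital of France?"))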

In the end, <current AI model> is driven towards engagement and delivering an answer, and that pushes it to generate false answers when it doesn't know or doesn't understand.

If it were more personality-controlled, getting it to deliver humbler, less confident answers, or even to say outright that it doesn't know, would be a lot easier.