One of the core problems we have in software engineering is the longstanding philosophical problem of creating cohesive, consistent, objective mental models of inherently subjective concepts like identifying a person, a place, etc. Look at the endless lists of falsehoods programmers (tend to) believe about any given topic.
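A minimal sketch of the point, in hypothetical Python (the `Person` class and its fields are illustrative, not taken from any real codebase): even a tiny, "obvious" data model quietly bakes falsehoods into something as subjective as a person's name.

```python
# Hypothetical example: an "objective" model of a subjective concept.
from dataclasses import dataclass

@dataclass
class Person:
    first_name: str  # assumes everyone has exactly one given name
    last_name: str   # assumes everyone has a family name at all

# Works for the cases the author imagined...
alice = Person(first_name="Alice", last_name="Ng")

# ...and quietly misrepresents everyone else (mononyms, multiple family
# names, names that change over time, names outside [A-Za-z], ...).
teller = Person(first_name="Teller", last_name="")
```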

You’re right that LLMs specifically have no guarantees about the accuracy or veracity of the text they generate, but I posit that that’s the same with people, especially when filtered through the socialization process. The difference is in the kind of errors machines make compared to ones that humans make.

It’s frustrating that we use anthropomorphic concepts like “hallucination” to describe LLM behaviors when the fundamental units of computation, and thus the failures of computation, are so different at every level.

> but I posit that that’s the same with people,

> The difference is in the kind of errors machines make compared to ones that humans make.

There's another difference, and that is that other humans can learn and study that mental model (which is why "readable code" is a goal: the code is a physical manifestation of the model that you, the programmer, have to learn), and then the model can be tweaked and taught back to the original programmer, who can then incorporate that tweak into their own thinking in the future. Programming is inherently (in most cases) a collaborative art, because you're working with people to collectively develop a mental model and refine it, smoothing it down until (as Christopher Alexander said) there are no misfits between the model and the domain.