> Yes, most people who have an incentive in pushing AI say that hallucinations aren't a problem, since humans aren't correct all the time.

Humans often make factual errors, but there's a difference between having a process for validating claims against external reality that occasionally gets things wrong, and having no such process at all, where every output is the product of internal statistical inference.

The LLM is engaging in the same process in all cases. We only call the output a "hallucination" when it fails to match our external expectations. But if "hallucination" refers to any situation where the output of a wholly endogenous process is mistaken for externally validated information, then LLMs are only ever hallucinating; they are simply designed so that what they hallucinate has a greater-than-chance likelihood of corresponding to some external reality.
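
To make that concrete, here's a minimal toy sketch in Python of the sampling step at the core of generation (the logits and tokens are made up for illustration; this is not any real model's API). The point is that nothing in the mechanism consults external reality; the same dice roll produces the "correct" token and the "hallucinated" one.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a softmax over logits -- the only 'decision'
    being made. Nothing here checks the result against the world."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical logits for the prompt "The capital of Australia is":
# training has pushed probability mass toward "Canberra", but "Sydney"
# comes out of exactly the same process when the dice land there.
logits = {"Canberra": 3.0, "Sydney": 1.5, "Melbourne": 0.5}
print(sample_next_token(logits))
```

Whether the printed token is the right answer or a hallucination is a judgment we apply after the fact; the generating process is identical either way.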