If you look at the failure modes, they closely resemble the failure modes of humans in equivalent situations. I'd say that, in practice, the anthropomorphic view is actually the most informative one we have about failure modes.

>they very closely resemble the failure modes of humans in equivalent situations

I don't think they do, if we are talking about an honest human being.

LLMs will happily hallucinate and even fabricate "sources" for their wrong responses. That alone contradicts what you are saying.