>they very closely resemble the failure modes of humans in equivalent situations

I don't think they do, if we are talking about an honest human being.

LLMs will happily hallucinate and even fabricate "sources" for their wrong responses. That alone contradicts what you are saying.