The entire point of the article is that LLMs cannot produce accurate text, so ironically, your claim that LLMs can produce accurate text illustrates your point about human reliability perfectly.

I guess the conclusion is that there are simply no avenues left for gaining knowledge.