I was an LLM naysayer for a long time, and during that time I would have agreed with you. Recent experiences have changed my mind. The accuracy I get from models doesn't suffer from the problems you describe, and many of those problems are also true, in different ways, of human beings. There's never any guarantee that the text you or I produce will be accurate, or that our summary of a text will be accurate; but if you ask us to generate text, we will. It reminds me of that old joke: "Your job application says you're fast at math. What's 513 * 487?" "39,414." "That's not even close." "But it was fast."