Except many of us are not having the problems you're having. I don't have LLMs "fail so spectacularly".
Interestingly, I have friends who aren't coders who use LLMs for various personal needs, and they run into the same kinds of problems you're describing. 100% of the time, I've found it's because they don't understand how to work with an LLM. Once I help them, they start getting better results. I don't have any need to anthropomorphize an LLM. I do, however, understand that I can use natural language to get quite complex and, yes, ACCURATE results from AI, IF I know what I'm doing and how to ask for it. It's just a tool, not a person.