Text completer is still the most apt analogy. If you ask it to open a file and do something with it, often it won't open the file and will just text-complete the output. That failure mode never happens with any sort of human; it happens because the model is a text completer.
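You can actually observe this directly with tool-calling APIs, by checking whether the model requested the tool or just emitted prose. A minimal sketch, assuming the OpenAI Python SDK (the model name, prompt, and read_file tool here are illustrative, not anything from this thread):

    # Sketch: detect whether the model actually asked to read the file
    # or just "text completed" an answer. Assumes openai>=1.0.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "read_file",  # hypothetical tool for this example
            "description": "Read a file from disk and return its contents.",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": "Open config.yaml and summarize it."}],
        tools=tools,
    )

    msg = response.choices[0].message
    if msg.tool_calls:
        # The model genuinely requested the tool call.
        print("Asked to read the file:", msg.tool_calls[0].function.arguments)
    else:
        # The text-completer failure: a fluent summary of a file it never read.
        print("Answered without opening the file:\n", msg.content)

When the else branch fires, you get exactly the failure described above: confident output about contents the model never saw.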
Many people hate it when you don't anthropomorphize LLMs, but refusing to do so is the best way to understand how they can fail so spectacularly, in ways we would never expect a human to fail.
Except many of us aren't having the problems you are having. I don't have LLMs "fail so spectacularly".
Interestingly, I have friends who aren't coders who use LLMs for various personal needs, and they run into the same kinds of problems you are describing. 100% of the time, I've found it's because they don't understand how to work with an LLM. Once I help them, they start getting better results. I don't have any need to anthropomorphize an LLM. I do, however, understand that I can use natural language to get quite complex and yes, ACCURATE, results from AI, IF I know what I'm doing and how to ask for it. It's just a tool, not a person.