It’s a human articulation problem.
When it receives a vague, generic input, it is free to interpret it according to how its training corpus fires, just like in any human interaction.
Articulating better is like writing a sentence that will stand the test of model updates.
Even then, I don't have an example off the top of my head, but even perfectly clear sentences can lead the agent to strange places. Miscommunication is easy between humans too, but anyone sensible would ask for confirmation if their interpretation seemed odd. The LLM very rarely questions the user.
I don't think it's fair to blame the user here. The tool has to work for normal users.