The issue is that in domains novel to the user, they cannot tell what is trivially false or a non sequitur, and the LLM will not help them filter these out.

If LLMs are to be valuable in novel areas, they need to be able to spot these issues and ask clarifying questions, or otherwise provide the appropriate corrective to the user's mental model.