> I’ve never liked that this behaviour is described using the term “hallucination”.

I have a standard canned rant about how "confabulation" is a much better metaphor, but that wasn't the point I was focussed on here.

> Fundamentally, if you don’t want to work with LLMs because they sometimes “bullshit”, are you planning on no longer working with human beings as well?

Yes: I will very much not voluntarily rely on a human for a given task if that human has demonstrated a pattern of bullshitting me on that kind of task, especially if, on top of the opportunity cost inherent in relying on a person at all, I am also required to compensate them (financially, say) for their notional attention to the task.