> If a human being talked confidently about something they were just making up out of thin air by synthesizing, consciously or unconsciously, from other information they know, you wouldn’t call it “hallucination”: you’d call it “bullshit”.
I'd recommend watching https://www.youtube.com/watch?v=u9CE6a5t59Y&t=2134s&pp=ygUYc..., which covers the topic of bullshit. I don't think we can call LLM output "bullshit", because someone spewing bullshit has to not care whether what they're saying is true or false, and LLMs don't "care" about anything because they're not human. It's better to use a different term to distinguish it from the human behaviour, even if the observed output is recognisably similar.
It's precisely because they can't care that they are, by definition, bullshit machines: if a system cannot care about truth, everything it produces is generated without regard for truth, and that is exactly what defines bullshit. See https://link.springer.com/article/10.1007/s10676-024-09775-5
I disagree with the article’s thesis completely. Humans are the ones who spread the bullshit; the LLM just outputs text. Humans are the necessary component to turn that text from “output” into “bullshit.” The machine can’t do it alone.