I’ve never liked that this behaviour is described using the term “hallucination”.

If a human being talked confidently about something they were making up out of thin air, synthesizing it (consciously or unconsciously) from other information they know, you wouldn’t call it “hallucination”: you’d call it “bullshit”.

And, honestly, “bullshit” is a much more helpful way of thinking about this behaviour, because it somewhat undercuts the arguments people make against using LLMs on these grounds. Fundamentally, if you don’t want to work with LLMs because they sometimes “bullshit”, are you planning to stop working with human beings as well?

It doesn’t hold up.

But, more than that, going back to your point: it’s much harder to redefine the term “bullshit” to mean something different to the common understanding.

All of that said, I don’t mind the piece and, honestly, the “I haven’t the foggiest” comment about the future of software development as a career is well made. I guess it’s just a somewhat useful collection of scattered thoughts on LLMs and, as such, an example of a piece where the “thoughts on” title fits well. I don’t think the author is trying to be particularly authoritative.

> I’ve never liked that this behaviour is described using the term “hallucination”.

I have a standard canned rant about how "confabulation" is a much better metaphor, but it wasn't the point I was focussed on here.

> Fundamentally, if you don’t want to work with LLMs because they sometimes “bullshit”, are you planning on no longer working with human beings as well?

I will very much not voluntarily rely on a human for a particular kind of task if that human has demonstrated a pattern of bullshitting me on that kind of task, yes, especially if, on top of the opportunity cost inherent in delegating the task to a person, I am also required to compensate them (e.g., financially) for their notional attention to it.

> If a human being talked confidently about something that they were just making up out of thin air by synthesizing based (consciously or unconsciously) on other information they know you wouldn’t call it “hallucination”: you’d call it “bullshit”.

I'd recommend you watch https://www.youtube.com/watch?v=u9CE6a5t59Y&t=2134s&pp=ygUYc... which covers the topic of bullshit. I don't think we can call LLM output "bullshit", because someone spewing bullshit has to not care whether what they're saying is true or false, and LLMs don't "care" about anything because they're not human. It's better to use an alternative term that differentiates it from the human behaviour, even if the observed output is recognisably similar.

It's precisely because they can't care that they are by definition bullshit machines. See https://link.springer.com/article/10.1007/s10676-024-09775-5

I disagree with the article’s thesis completely. Humans are the ones that spread the bullshit, the LLM just outputs text. Humans are the necessary component to turn that text from “output” into “bullshit.” The machine can’t do it alone.

At the risk of sounding woo, I find some parallels between how LLMs work and my experiences with meditation and writing. My subjective experience is that there is some unconscious part of my brain that supplies a scattered stream of words as a sentence forms --- without knowing the neuroscience of it, I could speculate that it is a "neurological transformer", some statistical model that has memorised a combination of the grammar and the contextual semantic meaning of language.

The difference is that the LLM is _only that part_. In producing language as a human, I filter these words, I go back and think of new phrasings, I iterate --- in writing consciously, in speech unconsciously. So rather than a sequence it is a scattered tree filled with rhetorical dead ends, pruned through interaction with my world-model and other intellectual faculties. You can pull on one thread of words as though it were fully-formed already as a kind of Surrealist exercise (like a one-person cadavre exquis), and the result feels similar to an LLM with the temperature turned up too high.
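To make the "temperature turned up too high" analogy concrete, here is a rough, illustrative sketch of temperature-scaled sampling from a next-token distribution. The logits, the token count, and the three temperature settings are invented purely for the example; this is not a claim about any particular model's internals.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Temperature rescales the model's raw scores before the softmax:
        # low temperature sharpens the distribution around the top token,
        # high temperature flattens it towards uniform.
        if rng is None:
            rng = np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs), probs

    logits = [4.0, 2.0, 0.5, 0.1]  # hypothetical scores for four candidate tokens
    for t in (0.2, 1.0, 2.5):
        _, probs = sample_next_token(logits, temperature=t)
        print(t, np.round(probs, 3))
    # At 0.2 the top token dominates almost completely; at 2.5 the probability
    # spreads out across all four tokens, which is the "too high" feel above.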

But if nothing else, this highlights to me how easily the process of word generation may be decoupled from meaning. And it serves to explain another kind of common human experience, which feels terribly similar to the phenomenon of LLM hallucination: the "word vomit" of social anxiety. In this process it suddenly becomes less important that the words you produce are anchored to truth, and instead the language-system becomes tuned to produce any socially plausible output at all. That seems to me to be the most apt analogy.

"Bullshit engine" is the term that best explains to a lay person what it is that LLMs do.