"Hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature".
I used to avidly read all his stuff, and I remember 20ish years ago he decided to rename Inversion of Control to Dependency Injection. In doing so, and in the accompanying blog post, he showed he didn't actually understand it at a deep level (hence the poor renaming).
This feels similar. I know what he's trying to say, but he's just wrong. He's trying to say the LLM is hallucinating everything, but what Fowler is missing is that hallucination in LLM terms refers to a very specific negative behavior.
No, it’s just an attempt to pretend wrong outputs are some special case when really they aren’t. They aren’t imagining something that doesn’t exist; they are just running the same process they do for everything else, and it just didn’t work.
If you disagree, then I would ask: what exactly is the “specific behaviour” you’re talking about?
As far as an LLM is concerned, there is no difference between "negative" hallucination and a positive one. It's all just tokens and embeddings to it.
Positive hallucinations are more likely to happen nowadays, thanks to all the effort going into these systems.
This basically ruins the term “hallucination” and makes it meaningless, when the term actually describes a real phenomenon.
That's the point. It is meaningless. When the term was first coined, there were already detractors who argued it was an incorrect description of the phenomenon. But it stuck.
I'll take the bait.
What didn't he understand properly about inversion of control, then?
That it is ultimately about object lifetime management. Dependency injection is a big part of it, but not the only thing.
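To make the distinction concrete, here's a minimal toy sketch (all names hypothetical, not any real framework's API) of the difference being claimed: constructor injection on its own just hands a dependency in, while a container in the broader IoC sense also decides when instances are created and how long they live (singleton vs. per-request scope).

    // GreetingService only sees dependency injection: something else supplies its collaborator.
    import java.util.function.Supplier;

    interface MessageSource { String message(); }

    class GreetingService {
        private final MessageSource source;
        // Constructor injection: the dependency arrives from outside.
        GreetingService(MessageSource source) { this.source = source; }
        String greet() { return "Hello, " + source.message(); }
    }

    // A toy container illustrating the wider IoC concern: it also manages object lifetime.
    class ToyContainer {
        private MessageSource singleton;               // created once, shared
        private final Supplier<MessageSource> factory; // recipe for fresh instances

        ToyContainer(Supplier<MessageSource> factory) { this.factory = factory; }

        // Singleton scope: the container owns one shared instance's lifetime.
        GreetingService singletonScoped() {
            if (singleton == null) singleton = factory.get();
            return new GreetingService(singleton);
        }

        // Prototype scope: a new dependency per request, discarded afterwards.
        GreetingService prototypeScoped() {
            return new GreetingService(factory.get());
        }
    }

    public class Demo {
        public static void main(String[] args) {
            ToyContainer container = new ToyContainer(() -> () -> "world");
            System.out.println(container.singletonScoped().greet()); // Hello, world
            System.out.println(container.prototypeScoped().greet()); // Hello, world
        }
    }

Both methods use injection, but only the container is choosing construction time and scope for you; that lifetime-management part is what the plain term "dependency injection" leaves out.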