Yep, now they are way more sophisticated. Same amount, though.
Nah, they have definitely reduced massively. I suspect that's just because as models get more powerful, their answers are more likely to be true rather than hallucinated.
I don't think anyone has found any new techniques to prevent them. But maybe we don't need that anyway if models just get so good that they naturally don't hallucinate much.
That's because they're harder to spot, not because there are fewer. In my field I still see the same amount. They're just not as egregious.
Not in my experience. For example, models often say "no, you can't do that" now, whereas they used to always say you could do things and just hallucinate something if it was impossible. Of course, sometimes they say you can't do things when you actually can (e.g. a few months ago ChatGPT told me you can't execute commands at the top level of a Makefile), but it's definitely less common.
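For the record, GNU Make does let you execute commands at the top level: $(shell ...) and $(info ...) run while the Makefile is being parsed, outside any rule. A minimal sketch (the git command is just an illustrative example):

```make
# Evaluated at parse time, before any rule runs: capture the current commit.
GIT_SHA := $(shell git rev-parse --short HEAD)

# $(info ...) also executes at the top level, printing while the Makefile is parsed.
$(info Building commit $(GIT_SHA))

all:
	@echo "commit: $(GIT_SHA)"  # recipe lines must be tab-indented
```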
> They're just not as egregious.
Uhm yeah, that's what I'm saying. The hallucination situation has improved.
They haven't reduced one bit in my experience.