Human hallucinations are natural.
Machine hallucinations are avoidable.
Was it hallucinating here, or are the commenters hallucinating? What OP is saying is just not true. A CT scan and a normal daily commute through Grand Central Station are NOT comparable in terms of radiation received. Somehow this is controversial because an AI said it?
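To put rough numbers on that claim, here is a minimal back-of-the-envelope sketch in Python. The figures are hedged ballpark values, not measurements: a chest CT is typically quoted at around 7 mSv, while the extra dose from Grand Central's granite and the commute time are assumptions made purely for illustration.

    # Back-of-the-envelope dose comparison (ballpark figures only).
    # Assumptions, not measurements:
    #   - a chest CT is typically quoted at around 7 mSv
    #   - the extra dose from Grand Central's granite is taken as very roughly
    #     1 mSv/year for someone inside the terminal around the clock
    #   - a commuter spends ~20 minutes a day in the station, ~250 days a year

    CHEST_CT_MSV = 7.0            # typical quoted chest CT dose
    GCT_EXTRA_MSV_PER_YEAR = 1.0  # rough assumed extra dose for 24/7 presence
    MINUTES_PER_DAY = 20          # assumed time in the station per commute day
    COMMUTE_DAYS_PER_YEAR = 250   # assumed commuting days

    hours_in_station = MINUTES_PER_DAY / 60 * COMMUTE_DAYS_PER_YEAR        # ~83 h/yr
    commuter_extra_msv = GCT_EXTRA_MSV_PER_YEAR * hours_in_station / 8760  # scale by time

    print(f"Extra dose from commuting: ~{commuter_extra_msv:.3f} mSv/year")
    print(f"Years of commuting per chest CT: ~{CHEST_CT_MSV / commuter_extra_msv:,.0f}")

Under these assumptions the commute adds on the order of a hundredth of a millisievert per year, several hundred times less than a single CT scan, which is the sense in which the two are not comparable.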
The machine, rather than a human, appears to have hallucinated that apples-to-oranges comparison.
(And I'm not picking on the machine at all here. I use it all the time. At first I treated it like an idiot intern that shouldn't have been hired at all: creative and full of spirit, but untrustworthy, with every idea needing to be filtered. Lately it's more like a decent apprentice who has a hangover and isn't thinking straight today. The machine has been getting better as time goes on, but it still drifts off from time to time.)
I don't understand. How was the machine hallucinating?
Are you new to LLMs?
They hallucinate shit all the time. It might even be said that all LLM output is a hallucination (but some of these hallucinations are useful).
It was NOT hallucinating this time. OP was.
What do you mean, all LLM output is hallucination? Would you say the same about AlphaGo? That system was also initially trained to predict human data, yet it's competent to the point of beating most humans at Go.
Is AlphaGo an LLM?
Anyway, we were here:
> Weird you don't have this requirement for the OP spewing his urban myths above.
It isn't my purpose to convince you that you're wrong in your apparent presumption that the output of a human and of a machine are somehow equivalent and should be treated equally.
So I won't.
Why is an LLM more prone to hallucination than AlphaGo?
> It isn't my purpose to convince you that you're wrong in your apparent presumption that the output of a human and of a machine are somehow equivalent and should be treated equally.
You should judge arguments by their merits, not by who is saying them.
You should have arguments, not gish gallops.
I'm out.
The LLM made the argument that OP's comparison was off, which you would see if you didn't have a stick up your arse about AI.