Are you new to LLMs?
They hallucinate shit all the time. It might even be said that all LLM output is a hallucination (but some of these hallucinations are useful).
It was NOT hallucinating this time. OP was.
What do you mean, all LLM output is hallucination? Would you say the same about AlphaGo? That system was also initially trained to predict human moves, yet it's competent enough to beat most humans at Go.
Is AlphaGo an LLM?
Anyway, we were here:
> Weird you don't have this requirement for the OP spewing his urban myths above.
It isn't my purpose to try to convince you that you're wrong in your apparent presumption that the output of a human and the output of a machine are somehow equivalent and should be treated equally.
So I won't.
Why is an LLM more prone to hallucination than AlphaGo?
> It isn't my purpose to try to convince you that you're wrong in your apparent presumption that the output of a human and the output of a machine are somehow equivalent and should be treated equally.
You should judge arguments by their merits, not by who is saying them.
You should make arguments, not Gish gallops.
I'm out.
The LLM made the argument that OP's comparison was off, which you would see if you didn't have a stick up your arse about AI.