You're as bad as the lazy, incompetent journalists. Just read the post instead of asking questions and pretending to be skeptical when you're actually too lazy to read the article this discussion is about.

Then you would be fully aware that the person to whom the quotes are attributed has stated, very clearly and emphatically, that he did not say those things.

Are you implying he is an untrustworthy liar about his own words, when you claim it's impossible to prove they're not hallucinations?

There is a third option: The journalist who wrote the article made the quotes up without an LLM.

I think calling the incorrect output of an LLM a “hallucination” is too kind to the companies creating these models, even if it’s technically accurate. “Being lied to” is a more accurate description of how the end user feels.

The journalist was almost certainly using an LLM, and a cheap one at that. The quote reads as if the model was instructed to build a quote solely using its context window.

Lying is deliberate deception, but yeah, to a reader, who is in effect a trusting customer paying with part of their attention diverted to advertising, broadcasting a hallucination is essentially the same thing.

I think you're missing their point. The question you're replying to is: how do we know this made-up content is a hallucination, i.e., as opposed to being made up by a human? I think it's fairly obvious via Occam's razor, but still, they're not claiming the quotes could be legit.

The point is they keep making excuses for not reading the primary source, and are using performative skepticism as a substitute for basic due diligence.

Vibe Posting without reading the article is as lazy as Vibe Coding without reading the code.

You don’t need a metaphysics seminar to evaluate this. The person being quoted showed up and said the quotes attributed to him are fake and not in the linked source:

https://infosec.exchange/@mttaggart/116065340523529645

>Scott Shambaugh here. None of the quotes you attribute to me in the second half of the article are accurate, and do not exist at the source you link. It appears that they themselves are AI hallucinations. The irony here is fantastic.

So stop retreating into “maybe it was something else” while refusing to read what you’re commenting on. Whether the fabrication came from an LLM or a human is not your get-out-of-reading-free card -- the failure is that fabricated quotes were published and attributed to a real person.

Please don’t comment again until you’ve read the original post and checked the archived Ars piece against the source it claims to quote. If you’re not willing to do that bare minimum, then you’re not being skeptical -- you’re just being lazy on purpose.

You seem to be quite certain that I had not read the article, yet I distinctly remember doing so.

By what process do you imagine I arrived at the conclusion that the article suggested the published quotes were LLM hallucinations, when that was not mentioned in the article's title?

You accuse me of performative skepticism, yet all I think is that it is better to have evidence than assumptions, and better to ask whether that evidence exists.

It seems a much better approach than making false accusations based on your own vibes. I don't think Scott Shambaugh went to that level, though.