> people aren't aware of how wrong they can be, and the errors take effort and knowledge to notice.

I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI/LLMs make no mistakes.

They were shocked that it's possible for hallucinations to occur. I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output cause some users to assume expertise?

Computers are always touted as deterministic machines. You can't argue with a compiler or Excel's formula editor.

AI, in all its glory, is seen as an extension of that: a deterministic thing, meticulously crafted to provide undisputed truth. It can't make mistakes, because computers are deterministic machines.

The idea of LLMs as networks of weights plus some randomness is an abstraction that is at once too vague and too complicated for most people. Companies also tend to say this part very quietly, so when people finally read the fine print, they are shocked.
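For what it's worth, the "weights plus randomness" part can be made concrete with a toy sketch. The tokens and scores below are invented, and real models score tens of thousands of candidate tokens with billions of weights, but the mechanism is the same: the network assigns a score to every possible next token, and the output is *sampled* from the resulting distribution. That sampling step is exactly where the nondeterminism comes in.

```python
import math
import random

# Toy illustration: a model's final layer produces a score (logit) for every
# candidate next token. These tokens and numbers are made up for the example.
logits = {"Paris": 4.2, "Lyon": 2.1, "Berlin": 1.3, "banana": -3.0}

def sample_next_token(logits, temperature=0.8):
    # Softmax turns raw scores into a probability distribution.
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}  # numerically stable
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token according to those probabilities -- this is the
    # source of the randomness in the output.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# The same "prompt" (same logits) can yield different answers on each run.
print([sample_next_token(logits) for _ in range(5)])
```

Run it a few times and the answer changes. That's the whole point: even with identical input, a sampled output is not deterministic unless you decode greedily (always take the highest-probability token).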

> I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output cause some users to assume expertise?

I think it's just that LLMs model the generative probability distribution over sequences of tokens so well that what they are actually nearly infallible at is producing convincing results. Often the correct result is also the most convincing one, but other times what seems most convincing to an LLM just happens to be most convincing to a human as well, regardless of correctness.

https://en.wikipedia.org/wiki/ELIZA_effect

> In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum and imitating a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.

It's complete bullshit. There is no way anyone ever thought anything was going on in ELIZA. There were people amazed that "someone could program that," but they had no illusions about what it was; it was obvious after 3 responses.

Don't be so sure. It was 1966, and even at a university, few people had any idea what a computer was capable of. Fast forward to 2025...and actually, few people have any idea what a computer is capable of.


If I weren't familiar with the latest in computer tech, I would also assume LLMs never make mistakes, given the excited praise for them over the last three years.

It is only in the last century or so that statistical methods were invented and applied. It is possible for many people to be very competent at what they do and at the same time totally ignorant of statistics.

There are lies, statistics and goddamn hallucinations.

My experience, speaking over a scale of decades, is that most people, even very smart and well-educated ones, don't know a damn thing about how computers work and aren't interested in learning. What we're seeing now is just one unfortunate consequence of that.

(To be fair, in many cases, I'm not terribly interested in learning the details of their field.)

Have they never used it? The majority of the responses that I can verify are wrong. Sometimes outright nonsense, sometimes believable. Be it general knowledge or something where deeper expertise is required.

I worry that the way the models "speak" to users will cause them to drop their 'filters' about what to trust and what not to trust.

We have barely begun talking about modern media literacy, and now we have machines that talk like 'trusted' face-to-face humans and can be "tuned" to suggest specific products or adopt whatever tone the owner/operator of the system wants.

> I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI/LLMs make no mistakes.

Highly educated professionals, in my experience, are often very bad at applied epistemology: they have no idea what they do and don't know.