> I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output causes some users to assume expertise?
I think it's just that LLMs model the generative probability distribution over token sequences so well that what they are nearly infallible at is producing convincing results. Often the correct result is also the most convincing one, but other times what scores as most probable to an LLM just happens to be most convincing to a human, regardless of correctness.
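A minimal toy sketch of the point, not how any real LLM works: the next-token distribution below is entirely invented, and the generator simply samples in proportion to probability, so the plausible-but-wrong continuation wins most of the time.

```python
import random

# Invented toy "model": hand-written next-token probabilities for one context.
# The only point is that generation picks the most *likely* continuation,
# which tracks plausibility in the training data, not truth.
NEXT_TOKEN_PROBS = {
    "the capital of australia is": {"sydney": 0.6, "canberra": 0.4},
}

def sample_next(context: str) -> str:
    """Sample a next token in proportion to the toy model's probabilities."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    # The wrong-but-common answer is generated more often than the correct one,
    # because the sampler scores token sequences, not facts.
    print(sample_next("the capital of australia is"))
```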
https://en.wikipedia.org/wiki/ELIZA_effect
> In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum and imitating a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
It's complete bullshit. There is no way anyone ever thought anything was going on in ELIZA. There were people amazed that "someone could program that," but they had no illusions about what it was; it was obvious after three responses.
Don't be so sure. It was 1966, and even at a university, few people had any idea what a computer was capable of. Fast forward to 2025...and actually, few people have any idea what a computer is capable of.