It is not LLM specific. The conclusion of the post states

> The web was already being poisoned for search and link ranking long before LLMs existed.

But it continues

> We are now plugging generative models directly into that poisoned pipeline and asking them to reason confidently about “truth” on our behalf.

So it's a shift from trusting Google to trusting the AI, which may or may not be more insidious, depending on each person's attitude.

It's a shift, but it's a little worse. Checking and auditing search results is easier and more ingrained; even if many people don't do it, everyone has been hit by spam at some point, so everyone knows it exists.

LLMs are the same thing but have an air of authority about them that a web search lacks, at least for now.

I listen to a podcast. The hosts are not tech people. They don't know much about AI, but they play around with it to the extent that most people do. They're both media professionals with long careers in radio news. They closely follow the news, and are very aware of how LLMs hallucinate (and have experienced it themselves).

Recently one of them asked Gemini a very detailed question about some specific baseball stats and was exclaiming over the quality of the information he got back and how it would have been impossible or at least extremely difficult to find the information via a traditional search.

It wasn't until his cohost asked if he had verified the information that he realized no, he hadn't; he had just immediately taken it at face value.

I recognize this is a single anecdote, but I think it illustrates that there is a tendency to trust what an LLM gives you, when it's stated so factually and with so much detail -- even if you should know better.

To me that's the opposite. Whatever an LLM gives me, I view with skepticism. If I google something, I quickly get a sense of how much I can trust it and what the BS factor is. I can refine my view in either case, but my a priori trust in an LLM is much lower.

Maybe we just need to work on training the general population to have a similar bias. (It will be harder than it sounds. Unbelievable amounts of capital are being bet on this not happening.)

In a discussion with my father-in-law about whether ChatGPT was trained on copyrighted materials, he literally asked ChatGPT and treated its answer that it wasn't as useful evidence. He went to MIT, so he's arguably more educated than most people will ever be, which makes it hard for me to be optimistic that simply explaining this to people better will move the needle significantly.

Yes, it's the same for me, but we're not representative of most people I'm afraid.