As right as this may be, it elides the crucial difference between asking LLMs and all the other methods of asking questions you enumerated. The difference is not in the quality of information you might get from a friend or a blog versus an LLM. The difference is the centralization and feeding of the same poor-quality information to massive numbers of people at scale. At least whatever bonkers theory someone "researches" on their own is going to be a heterodox set of ideas, with a limited blast radius. Even a major search engine up-ranking a site devoted to, like, how horse dewormers can cure covid doesn't present that link as if it were the answer to how to cure covid, right? LLMs have a pernicious combination of sounding authoritative while speaking gibberish. Their real skill is not in surfacing the truth from a mass of data; it's in presenting a set of assertions as truth in a way that might satisfy the maximum number of people with limited curiosity, and in establishing an artificial sense of trust. That's why LLMs are likely the most demonic thing ever made by man. They are machines built to lie, tell half-truths, obfuscate and flatter, all at the same time. Doesn't that sound enough like every religion's warning about the devil?
But nothing has changed there. People have been posting intelligent-sounding gibberish on social media and blogs for years before LLMs.
The problem with centralisation isn’t that it gobbles up data. It’s that it allows the model’s weights to be dictated by a small few, who might choose to skew the model in favour of the messaging they want to promote.
And this is a genuine concern. But it’s not a new problem either. We already have that problem with news broadcasters, newspaper publications, social media ethics teams, and so on and so forth.
The new problem LLMs bring to human interaction isn’t any of the issues described above. It’s with LLMs replacing human contact in situations where you need something with a conscience to step in.
For example, conversations where the AI ends up reinforcing negative thoughts in people with mental health problems, because the chat history starts to overwhelm the context window and the system prompt does an ever poorer job of steering the conversation away from dangerous topics like suicide.
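To make that mechanism concrete, here is a deliberately simplified sketch of the kind of naive chat loop that produces it. This is purely hypothetical: the token counting, context limit, and prompt text are all made up, and no vendor's real implementation is being described. The point is just that the system prompt stays a fixed size while the history keeps growing, so its share of what the model actually attends to shrinks with every exchange.

```python
# Hypothetical sketch of a naive chat loop that trims the oldest turns
# to fit a fixed context budget. Not any real vendor's implementation.

MAX_CONTEXT_TOKENS = 8000  # assumed model limit
SYSTEM_PROMPT = "Steer the user away from self-harm and toward professional help."


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(text.split())


def build_context(history: list[str]) -> list[str]:
    """Assemble the prompt, dropping the oldest turns when over budget."""
    budget = MAX_CONTEXT_TOKENS - count_tokens(SYSTEM_PROMPT)
    kept: list[str] = []
    used = 0
    for turn in reversed(history):  # keep only the most recent turns
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    kept.reverse()
    return [SYSTEM_PROMPT] + kept


# After a long conversation, the system prompt is a tiny fraction of the
# tokens in the context, and the accumulated history dominates it.
history = [f"user turn {i}: something increasingly bleak" for i in range(4000)]
context = build_context(history)
print(count_tokens(SYSTEM_PROMPT), sum(count_tokens(t) for t in context))
```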
This isn’t to say that the points you’ve raised aren’t real problems. They definitely do exist. But they’ve also always existed, even before GPT was invented. We’ve just never properly addressed those problems because:
either there’s no incentive to. If you are powerful enough to control the narrative, then why would you use that power to turn it against you?
…or there simply isn’t a good way of solving the problem. E.g. I might hate stupid conspiracy theories, but censoring research is a much worse alternative. So we just have to allow nutters to share their dumb ideas, in the hope that enough legitimate research gets published, and enough people are sensible enough to read it, that the nutters don’t end up having any meaningful impact on society.