I get this take, but given the state of the world (the US anyway), I find it hard to trust anyone with any kind of profit motive. I feel like any information can’t be taken as fact; it can only be rolled into your world view or discarded depending on whether it’s useful. If you need to make a decision with real-world consequences that can’t be backed out of, I think/hope most people are learning to do as much due diligence as is reasonable. LLMs seem, at this moment, to be trying to give reliable information. When they’ve been fine-tuned to avoid certain topics, it’s obvious. This could change, but I suspect it will be hard to fine-tune them too far in one direction without losing capability.
That said, it definitely feels as though keeping a coherent picture of what is actually happening is getting harder, which is scary.
> I feel like any information can’t be taken as fact; it can only be rolled into your world view or discarded depending on whether it’s useful.
The concern, I think, is that for many people that “discard function” is not “Is this information useful?” but rather “Does this information reinforce my existing world view?”
That feedback loop and where it leads is potentially catastrophic at societal scale.
This was happening well before LLMs, though. If anything, I have hope that LLMs might break some people out of their echo chambers if they ask things like "do vaccines cause autism?"
> I have hope that LLMs might break some people out of their echo chambers
Are LLMs "democratized" yet, though? If not, then it's just as likely that LLMs will be steered by their owners to reinforce an echo chamber of their own.
For example, what if RFK Jr launched an "HHS LLM" - what then?
... nobody would take it seriously? I don't understand the question.
> I find it hard to trust anyone with any kind of profit motive.
As much as this is true, and doctors, for example, certainly can profit (here in my country they don't get any kind of sponsorship money AFAIK, other than charging very high rates), there is still accountability.
We have built a society based on rules and laws: if someone does something that harms you, you can follow a path to at least hold someone accountable (or try to).
The same cannot be said about LLMs.
> there is still accountability
I mean, there is some if they go wildly off the rails, but in general a doctor who gives a prognosis based on a tiny fraction of the total corpus of evidence is still covered. That works well if you have the common issue, but it can quickly go wrong if you have the uncommon one.
Comparing anything real professionals do to the endless, unaccountable, unchangeable stream of bullshit from AI is downright dishonest.
This is not the same scale of problem.