Websites were a way to get authoritative information about a company, from that company (or another trusted source like Wikipedia). That trust is powerful, which is why we collectively spent so much time trying to educate users about the "line of death" in browsers, drawing padlock icons, chasing down impersonator sites, mitigating homoglyph attacks, etc. This all rested on the assumption that certain sites were authoritative sources of information worth seeking out.
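As an aside on why homoglyph attacks needed so much mitigation effort: two domain names can render nearly identically while being entirely different strings. A minimal sketch (the `mixes_scripts` heuristic is my own simplification, real browsers apply much more nuanced IDN display policies):

```python
# Illustration: the Cyrillic "а" (U+0430) renders almost identically
# to the Latin "a", so the two domains below look the same on screen.
import unicodedata

latin = "apple.com"
spoof = "\u0430pple.com"  # first character is Cyrillic, not Latin

print(latin == spoof)               # False: different codepoints
print(unicodedata.name(latin[0]))   # LATIN SMALL LETTER A
print(unicodedata.name(spoof[0]))   # CYRILLIC SMALL LETTER A

# One crude mitigation: flag labels whose letters mix scripts.
# (A simplified stand-in for what IDN display policies actually do.)
def mixes_scripts(label: str) -> bool:
    scripts = set()
    for ch in label:
        if ch.isalpha():
            name = unicodedata.name(ch)
            scripts.add(name.split()[0])  # e.g. "LATIN", "CYRILLIC"
    return len(scripts) > 1

print(mixes_scripts(latin))  # False
print(mixes_scripts(spoof))  # True
```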
I'm not really sure what trust means in a world where everyone relies uncritically on LLM output. Even if the information from the LLM is usually accurate, can I rely on that in some particularly important instance?
You raise a good point, and one I rarely see discussed.
I still believe it fundamentally comes down to an interface issue. But how trust gets decoupled from the interface (as you said, the padlock shown in the browser and the certs that validate a website's origin), that's an interesting one to think about :-)
I imagine there will be the same problems as with Facebook and other large platforms that used their power to promote genocide. If you're in the mood for some horror stories:
https://erinkissane.com/meta-in-myanmar-full-series
When LLMs are suddenly everywhere, who's making sure that they are not causing harm? I got the above link from Dan Luu (https://danluu.com/diseconomies-scale/), and if his text there is anything to go by, the large companies producing LLMs will have very little interest in making sure their products are not causing harm.
In some cases, like the Air Canada one where the courts made them uphold a deal offered by their chatbot, it'll be "accurate" information whether the company wants it to be or not!
Not everything an LLM tells you is going to be worth going to court over if it's wrong, though.