To me it's the opposite. Whatever an LLM gives me, I view with skepticism. If I google something, I quickly get a sense of how much I can trust it and what the BS factor is. I can refine my view in either case, but my a priori trust in an LLM is much lower.
Maybe we just need to work on training the general population to have a similar bias. (It will be harder than it sounds. Unbelievable amounts of capital are being bet on this not happening.)
In a discussion with my father-in-law about whether ChatGPT was trained on copyrighted materials, he literally asked ChatGPT and treated its denial as useful evidence. He went to MIT, so he's arguably more educated than most people will ever be; that makes it hard for me to be optimistic that just explaining this to people better will move the needle significantly.
Yes, it's the same for me, but we're not representative of most people, I'm afraid.