LLMs are optimized for sycophancy and "preference". They are the ultra-processed foods of information sharing. There's a big difference between having to synthesize what's written in a book and having some soft LLM output slide down your gullet and into your bloodstream without you ever needing to reflect on it. The delivery is the problem, and it definitely makes people think they're smarter and more capable than they are in areas they don't know well. "What an insightful question…"

Wikipedia was already bad: lowbrow people would google an article and recite it uncritically, but at least some brain work was still involved. AI is that, plus personalization.