This article feels lazy. Is the main argument in a similar vein to "don't read the books that experts have written, and go figure stuff out on your own"? I'm trying to understand what is wrong with using a new data compression tool (LLMs) that we have built to understand the world around us. Even books are not always correct, and we've figured out ways to live with that and correct it. It doesn't mean we should "stop wasting time learning the craft".
LLMs are optimized for sycophancy and “preference”. They are the ultra-processed foods of information sharing. There’s a big difference between having to synthesize what’s written in a book and having some soft LLM output slide down your gullet and into your bloodstream without you needing to even reflect on it. It’s the delivery that’s the issue, and it definitely makes people think they are smarter and more capable than they are in areas they don’t know well. “What an insightful question…”
Wikipedia was already bad: lowbrow people would google and read articles uncritically, but there was still some brain work involved. AI is that, plus personalization.
No, I do not quite think that is what they wrote here. But what's the thought process here? It's hard for me even to understand if the first scare quote is supposed to be from someone being critical or someone responding to the critique. It seems like it could apply to both?
I am not the author, but I'm quite curious to know what prevented comprehension here? Or I guess, what made it feel lazy? I'm not saying it's going to win a Pulitzer, but it is at minimum fine prose to me.
Or is the laziness here more about the intellectual argument at play? I'd offer that, but since you seem to be asking us what the argument even is, I know that reading doesn't quite fit.
I have been a fool in the past, so I always like to read the thing I want to offer an opinion on, even if I have to hold my nose about it. It helps a lot in refining critique and clarifying one's own ideas, even when one disagrees with the material. But also, YMMV!
> what prevented comprehension here?
This is an arrogant and unwarranted assumption. What's preventing your comprehension of this discussion?
The article sets up a straw man - the person who can convincingly fake being an expert without actually being one - and then demolishes it.
This doesn't resemble anything that I've experienced from LLM use in the real world. In my experience, amateur use of LLM is easily detected and exposed, and expert use is useful as a force multiplier.
I suppose the "Dunning-Kruger" accusation might apply to the first one, but I'm not convinced - the people doing that are usually very aware that they're faking their attempt at projecting expertise, and this comes across in all sorts of ways.
GP asked us what the blog is arguing; it doesn't seem too unwarranted to assume they didn't comprehend it. Or am I missing something?
Also, just fwiw, I really tried, but I am truly having trouble comprehending what you are saying, or at least how it bears on the article. It is 8-9 short paragraphs long; can you point to where he demolishes the straw man? Or what does that even mean to you? Isn't it a good thing to demolish a straw man, given that it is a fallacy?
Trying to be charitable here parsing this: I don't think Dunning-Kruger really speaks to one's ability to convince, right? Doesn't it really manifest when we don't actually need to be convincing to anyone? This is the definitional thing about it: you are precisely not aware you are "faking" it; you think you are doing really great!
Your comment feels lazy as well. It waves off the article without engaging with its core argument. The piece isn’t saying “ignore experts”. It’s questioning how we use tools like LLMs to think, not whether we should. There’s a difference between rejecting expertise and examining how new systems of knowledge mediate understanding.
> Your comment feels lazy as well.

You repeated your one thought four times.
As a lazy person, that's the opposite of what I'd do.
Edit: oh, you completely re-worded what I'm replying to. Carry on.
At least they put forward their own thoughts instead of a blind complaint.
> I'm trying to understand what is wrong with using a new data compression tool (LLMs) that we have built to understand the world around us.
What's wrong with it is that many people are resistant to it. That's all there is to it.
> that we have built to understand the world around us
Pretty generous description. LLM output doesn't have any relationship with facts.