The author says it's too long. So let's tighten it up.
A criticism of the use of large language models (LLMs) is that it can deprive us of cognitive skills. Are some kinds of use better than others? Andy Masley's blog argues that "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us: we will be freed up to think about other things.
My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".
Masley writes that it's "bad to outsource your cognition when it:"
- Builds tacit knowledge you'll need in future.
- Is an expression of care for someone else.
- Is a valuable experience on its own.
- Is deceptive to fake.
- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.
How we choose to use chatbots is about how we want our lives and society to be.
That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.
I think this summary is oversimplifying: the rest of the blog post elaborates on how the author and Masley have completely different interpretations of that bullet-point list. The rest of the text is not only examples; it elaborates on the thought processes that led him to his conclusions. I found the nuanced treatment of the two opposing interpretations, not the conclusion, the most enjoyable part of the post.
(This comment could also be shortened to "that's oversimplifying". I think my longer version is both more convincing and more enjoyable.)
I feel like your comment is in itself a great analogy for the "beware of using LLMs in human communication" argument. LLMs are, in the end, statistical models that regress to the mean, so by design they flatten out our communication, much like a reductive summary does. I care about the nuance we lose when communicating through "LLM filters", but apparently others don't.
That makes for a tough discussion, unfortunately. I see a lot of value lost by putting LLMs in email clients, and I don't observe the benefit; LLMs are a net time sink because I have to rewrite their output myself anyway. Proponents seem to see no loss of value, and they do observe an efficiency gain.
I am curious to see how the free market will value LLM communication. Will the lower quality and higher quantity be a net positive for job seekers sending applications or sales teams nurturing leads? The way I see it, either we end up in a world where, e.g., job matching is almost completely automated, or we find an effective enough AI spam filter and are back to square one. I hope it will be the latter, because agents negotiating job positions are bound to create more inequality, with all the jobs going to the applicants who hire the most expensive agents.
Either way, a great deal of compute and human capital will go to waste.
> Proponents seem to not see any value loss, and they do observe an efficiency gain.
You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.
If you're in customer support and have to deal with dumbasses all day long who are too stupid to read the fucking instructions, I imagine being able to type that out, and then having the AI strip the profanity and the insults before it reaches the customer, would be rather cathartic. Now substitute something that's actually complicated to explain for "read the manual".
> You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.
Anyone semi-literate can write down what they're feeling.
It's sometimes called "journaling".
Thinking through what they've written, why they've written it, and whether they should do anything about it is often called "processing emotions."
The AI can't do that for you. The only way it could would be by taking over your brain, but then you wouldn't be you any more.
I think using the AI to skip these activities would be very bad for the people doing it.
It took me decades to realize there was value in doing it, and my life changed drastically for the better once I did.
I don't understand this summary - isn't it a summary of the author's recitation of Masley's position? It's missing the part that actually matters: the author's position and how it differs from Masley's.
Yep - it honestly reads like an LLM summary, and those often miss critical nuances.
I know, especially with the bullet points.
The meat there is when not to use an LLM. The author seems to mostly agree with Masley on what's important.
I am curious if you understand how disrespectful your comment is.
It's as dismissive as spitting in someone's face.
It actually isn’t very long. I was expecting it to be much longer after the author’s initial warning.
This here is why I always read the comments /first/ on HN