I think what all these kinds of comments miss is that AI can help people express their own ideas.
I used AI to write a thank-you to a non-English-speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.
If I spend hours writing and rewriting a paragraph into something I love while using AI to iterate, did I write that paragraph?
edit: Also, I think maybe you don't appreciate the people who struggle to write well. They are not proud of the mistakes in their writing.
> did I write that paragraph?
No. My kid wrote a note to me chock full of spelling and grammar mistakes. That has more emotional impact than if he'd spent the same amount of time running it through an AI. It doesn't matter how much time you spent on it really, it will never really be your voice if you're filtering it through a stochastic text generation algorithm.
What about when someone who can barely type (like Stephen Hawking used to, at three minutes per sentence using his cheek) uses autocomplete to reduce the unbelievable effort required to type out sentences? That person could pick the autocompleted sentence that is closest to what they're trying to communicate, and such a thing can be a life saver.
You may as well ask for a person that can walk to be able to compete in a marathon using a car.
I’m all for using technology for accessibility. But this kind of whataboutism is pure nonsense.
The intention isn't whataboutism; it's about where you draw the line. And your example betrays you…
Forgive a sharp example, but consider someone who is disabled and cannot write or speak well. If they send a loving letter to a family member using an LLM to help form words and sentences they otherwise could not, do you really think the recipient feels cheated by the LLM? Would you seriously accuse them of not having written that letter?
If you buy a Hallmark greeting card and send it to someone with your signature on it, did you write the whole card?
Your arguments are verging on the obtuse.
Read the article again. Rob Pike got a letter from a machine saying it is "deeply grateful". There's no human there expressing anything, worse, it's a machine gaslighting the recipient.
If a family member used an LLM to write a letter to another, then at least the recipient can believe the sender feels the gratitude in their human soul. If they used an LLM to write a message in their own language, they would have proofread it to see if they agree with the sentiment and "take ownership" of the message. If they used an LLM to write a message in a foreign language, there's still a sender with a feeling, trusting the technology to translate the message into a language they don't know, in the hope that it does so correctly.
If it turns out the sender just told a machine to send each of their friends a copy-pasted message, the sender is a lazy, shallow asshole, but there's still in their heart an attempt at brightening someone's day, however lazily executed...
I think maybe you missed that my response was to this comment:
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
I already said in other comments that the OP was a different situation.
I think you created it the same way Christian von Koenigsegg makes supercars. You didn't hand-make each panel or hand-design the exact aerodynamics of the wing; an engineer with a computer algorithm did that. But you made it happen, and that's still cool.
It is not about being proud, it is about being sincere.
If you send me a photo of the moon supposedly taken with your smartphone but enhanced by the photo app to show all the details of the moon, I know you aren't being sincere and are sending me random slop. Same if you're sending me words you cannot articulate yourself.
> These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas
The writing is the ideas. You cannot be so full of yourself as to think you can write a two-second prompt and get back "your idea" in a more fleshed-out form. Your idea was to have someone/something else do it for you.
There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.
As the saying goes, "If I'd had more time, I would have written a shorter letter". Of course AI can be used to lazily stretch a short prompt into a long output, but I don't see any implication of that in the parent comment.
If someone isn't a good writer, or isn't a native speaker, using AI to compress a poorly written wall of text may well produce a better result while remaining substantially the prompter's own ideas. For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.
We should all be more understanding, and not assume that only people with certain cognitive and/or physical capabilities can have something valuable to say. If AI can help someone articulate a fresh perspective or disseminate knowledge that would otherwise have been lost and forgotten, I'm all for it.
> For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.
These are the exact kinds of cases I think are ok, but let's not pretend even 10% of the AI writing out there fits this category
This feels like the essential divide to me. I see this often with junior developers.
You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc and become a much better developer faster than ever before.
Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of the effort I spent learning things could be done much faster now.
If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.
This is pretty far off from the original thread though. I appreciate your less abrasive response.
> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.
While this seems like it might be the case, those hours you (or we) spent banging our collective heads against the wall were developing determination and mental toughness, while priming your mind for more learning.
Modern research consistently shows that the difficulty of a task correlates with how well you retain information about it. Spaced repetition shows that we can't just blast our brains with information; there needs to be spacing between sessions for knowledge to stick.
While LLMs clearly increase our learning velocity (if used right), there is a hidden cost to removing that friction. The struggle and challenge of the process built your mind and character in ways you can't quantify, and years of maintaining that approach have essentially made you who you are. You have become implicitly OK with grinding out a simple task without a quick solution, and the grit built that way is irreplaceable.
I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?
Totally agree, but also, I still spend tons of time struggling and working on things with LLMs, it is just a different kind of struggle, and I do think I am getting much better at it over time.
> I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?
Strong agree here.
> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time
But this is the learning process! I guess time will tell whether we can really do without it, but to me these long struggles seem essential to building deep understanding.
(Or maybe we will just stop understanding many things deeply...)
Yeah it can be a risk or a benefit for sure.
I agree that struggle matters. I don’t think deep understanding comes without effort.
My point isn’t that those hours were wasted, it’s that the same learning can often happen with fewer dead ends. LLMs don’t remove iteration, they compress it. You still read, think, debug, and get things wrong, just with faster feedback.
Maybe time will prove otherwise, but in practice I have found they let me learn more, not less, in the same amount of time.
That is not what is happening here. There is no human in the loop; it's just automated spam.
Good point. My response was to the comment, not the OP.
Well your examples are things that were possible before LLMs.
This is disingenuous
What beautiful things? It just comes across as immoral and lazy to me. How beautiful.
> People are capable of seeing which is which.
I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:
> When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
> I agree just telling an AI 'write my thank you letter for me' is pretty shitty
Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that e.g. retains your voice or makes clear which aspects of the writing are originally your own? If so, how?
I hear you, and I think AI has some good uses, especially in assisting with challenges like the ones you mentioned. I think what's happening is that these companies are developing this stuff without transparency on how it's being used, there is zero accountability, and they are forcing some of this tech into our lives without giving us a choice.
So I'm sorry, but much of it is being abused, and the abuse needs to stop.
I agree about the abuse, and the OP is probably a good example of that. Do you have any ideas on how to curtail abuse?
Ideas I often hear usually assume it is easy to discern AI content from human, which is wrong, especially at scale. Either that, or they involve some form of extreme censorship.
Microtransactions might work by making it expensive to run bots while costing human users very little. I'm not sure this is practical either, though, and it has plenty of downsides as well.
I don't see this changing without a complete shift in our priorities at the level of politics and business: enforcing antitrust legislation and dealing with Citizens United. Corporations don't have free speech. Free speech and other rights like these are limited to living, breathing humans.
Corporations operate by charters, granted by society to operate in a limited fashion, for the betterment of society. If that's not happening, corporations don't have a right to exist.
I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.
You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.
(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)
Do you feel the same about spellcheck?
Does spellcheck take a full sentence and spit out paragraphs of stuff I didn't write?
I mean, how do you write this seriously?
But in the end a human takes the finished work and says yes, this matches what I intended to communicate. That is what is important.
That's neither what happens nor what is important.
> I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.
Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with pen and paper by candlelight, but I would argue that we can produce much higher-quality writing than ever before by collaborating with AI.
> As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.
This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more," maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.
I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on Hacker News default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than as thoughtful.
I guess your point is that a camera, a paintbrush, and an LLM are all tools, and as long as the user is involved in the making, then it is still their art? If so, then I think there are two useful distinctions to make:
1. The extent to which the user is involved in the final product differs greatly with these three tools. To me there is a spectrum with "painting" and e.g. "hand-written note" at one extreme, and "Hallmark card with preprinted text" on the other. LLM-written email is much closer to "Hallmark card."
2. Perhaps more importantly, when I see a photograph, I know what aspects were created by the camera, so I won't feel misled (unless they edit it to look like a painting and then let me believe that they painted it). When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
I think you are right that it is a spectrum, and maybe that's enough to settle the debate. It is more about how you use it than the tool itself.
Maybe one more useful consideration for LLMs. If a friend writes to me with an LLM and discovers a new writing pattern, or learns a new concept and incorporates that into their writing, I see this as a positive development, not negative.
But what about the second point?
I would be very surprised if no interesting art could be made with LLMs. But, like a camera, it produces a distinct kind of art to other tools. We do not say that a camera produces a painting. Instead photography is its own medium with its own forms and techniques and strengths and weaknesses.
Using photography to argue that all good writing will obviously be replaced by LLM output is... odd.
Neither a camera nor a paintbrush generates art. They still require manual human input for everything, and offer no creative capacity of their own.
A photograph is an expression of the photographer, who chooses the subject, its framing, filters, etc. Ditto a painting.
LLM output is inherently an expression of the work of other people (irrespective of what training data, weights, prompts it is fed). Essentially by using one you're co-authoring with other (heretofore uncredited) collaborators.
I think the fact that people don't understand why there are so many negative positions is equally frustrating. To me it seems blatantly obvious that the majority of LLM usage today comes from models trained on stolen data, without following any of the requirements or licenses of the authors.
With Rob Pike being such a prolific figure in software development, it's likely that a sizable portion of what makes the LLM function and be able to send him that email was possible only because they didn't uphold their end of the bargain. I don't see why anyone has trouble comprehending why this would make him furious?
I know for me personally, I'm happy to share things I've made, but make no mistake: I would never share them if other users did not credit me, specifically by following the terms in the license I've published. The fact that LLMs have ingested and used so much software, yet I can't find the license text provided by the training data's authors, is at minimum deeply disturbing and at most actively harmful. For works licensed under something like the GPL, where someone is only OK with their software being used under strict terms, I don't even know where to start with how upset I imagine they would be.
Why is this weird? If anything I feel it would be the default response from someone on here.