I don't like binary takes on this. I think the best question to ask is whether you own the output of your editing process. Why does this article exist? Does it represent your unique perspective? Is this you at your best, trying to share your insights with the world?

If yes, there's probably value in putting it out. I don't care if you used paper and ink, a text editor, a spell checker, or asked an LLM for help.

On the flip side, if anyone could've asked an LLM for the exact same text, and if you're outsourcing critical thinking to the reader - then yeah, I think you deserve scorn. It's no different from content-farmed SEO spam.

Mind you, I'm what you'd call an old-school content creator. It would be an understatement to say I'm conflicted about gen AI. But I also feel that this is the most principled way to make demands of others: I have no problem getting angry at people for wasting my time or polluting the internet, but I don't think I can get angry at them for producing useful content the wrong way.

Exactly. If it's substantially the writer's own thoughts and/or words, who cares if they collaborated with an LLM, or autocomplete, or a spelling/grammar-checker, or a friend, or a coworker, or someone from Fiverr? This is just looking for arbitrary reasons to be upset.

If it's not substantially their own writing or ideas, then sure, they shouldn't pass it off as such and claim individual authorship. That's a different issue entirely. However, if someone just wanted to share, "I'm 50 prompts deep exploring this niche topic with GPT-5 and learned something interesting; quoted below is a response with sources that I've fact-checked against" or "I posted on /r/AskHistorians and received this fascinating response from /u/jerryseinfeld", I could respect that.

In any case, if someone is posting low-quality content, blame the author, not the tools they happened to use. OOP may as well say they only want to read blog posts written with vim, and that emacs users should stay off the internet.

I just don't see the point in gatekeeping. If someone has something valuable to share, they should feel free to use whatever resources they have available to maximize the value provided. If using AI makes the difference between a rambling draft riddled with grammatical and factual errors, and a more readable and information-dense post at half the length with fewer inaccuracies, use AI.

In my experience, if the AI voice is immediately noticeable, the writing provides nothing new, and most of the time it's actively wrong or trying to make itself seem important and sell me on something the owner has a stake in.

Not sure if this is true for other people, but it's basically always a sign of something I end up wishing I hadn't wasted my time reading.

It isn't inherently bad by any means, but it turns out to be a useful quality metric in my personal experience.

That was essentially my takeaway. The problem isn't when AI was used. It's when readers can accurately deduce that AI was used. When someone uses AI skillfully, you'll never know unless they tell you.

I feel like I've seen this comparison made before, but LLMs are best applied like autotune. 99% of vocal recordings released on major (and even indie) labels have some degree of autotune applied. When done correctly, you can't tell (unless you're a grizzled engineer who can hear 1dB of compression or slight EQ changes). It's only when it's cranked up or used lazily that it detracts from the overall product.

"but I don't think I can get angry at them for producing useful content the wrong way"

What about plagiarism? If a person hacks together a blog post that is arguably useful but they plagiarized half of it from another person, is that acceptable to you? Is it only acceptable if it's mechanized?

One of the arguments against GenAI is that the output is basically plagiarized from other sources -- that is, of course, an oversimplification, but hoovering up other people's content and then producing new content based on what was "learned" from it (at scale) is what GenAI does.

The ecological impact of GenAI tools and the practices of GenAI companies (as well as the motives behind those companies) remain the same whether one uses them a lot or a little. If a person has an objection to the ethics of GenAI then they're going to wind up with a "binary take" on it. A deal with the devil is a deal with the devil: "I just dabbled with Satan a little bit" isn't really a consolation for those who are dead-set against GenAI in its current forms.

My take on GenAI is a bit more nuanced than "deal with the devil", but not a lot more. But I also respect that there are folks even more against it than I am, and I'd agree from their perspective that any use is too much.

My personal thoughts on gen AI are complicated. A lot of my public work was vacuumed up for gen AI, and I'm not benefitting from it in any real way. But for text, I think we already lost that argument. To the average person, LLMs are too useful to reject them on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots". Mind you, it pains me to write this. I just think that ship has sailed.

I think we have a better shot at making that argument for music, visual art, etc. Most of it is utilitarian and most people don't care where it comes from, but we have a cultural heritage of recognizing handmade items as more valuable than the mass-produced stuff.

I don't think that ship has sailed as far as you suggest: there are strong proponents of LLMs/GenAI, but IMO not many more than there were for NFTs, cryptocurrencies, and other technologies that ultimately did not hit mainstream adoption.

I don't think GenAI or LLMs are going away entirely - but I'm not convinced that they are inevitable and must be adopted, either. Then again, I'm mostly a hold-out when it comes to things like self-checkout, too. I'd rather wait a bit longer in line to help ensure a human has a job than rush through self-checkout if it means some poor soul is going to be out of work.

> I just think that ship has sailed.

Sadly, I agree. That's why I removed my works from the open web entirely: there is no effective way for people to protect their works from this abuse on the internet.

> To the average person, LLMs are too useful to reject them

The way LLMs are now, the average person outside of the tech bubble has no use for them.

> on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots"

This is a bizarre argument. Humans don't "train" on books; they read them. This could be for many reasons, like to learn something new or to feel an emotion. The LLM trains on the book to be able to imitate it without attribution. These activities are not comparable.

I feel like plagiarism is an appropriate analogy. Students can always argue they still learned something from it, yada yada, and there's probably some truth to that. However, we still reject it on principle, in a pretty binary manner. I believe the same reasoning applies to LLM artifacts too, or at least in spirit.