"but I don't think I can get angry at them for producing useful content the wrong way"

What about plagiarism? If a person hacks together a blog post that is arguably useful but they plagiarized half of it from another person, is that acceptable to you? Is it only acceptable if it's mechanized?

One of the arguments against GenAI is that its output is essentially plagiarized from other sources. That's oversimplified in the case of GenAI, of course, but hoovering up other people's content and then producing new content based on what was "learned" from it, at scale, is what it does.

The ecological impact of GenAI tools and the practices of GenAI companies (as well as the motives behind those companies) remain the same whether one uses them a lot or a little. If a person has an objection to the ethics of GenAI then they're going to wind up with a "binary take" on it. A deal with the devil is a deal with the devil: "I just dabbled with Satan a little bit" isn't really a consolation for those who are dead-set against GenAI in its current forms.

My take on GenAI is a bit more nuanced than "deal with the devil", but not a lot more. But I also respect that there are folks even more against it than I am, and I'd agree from their perspective that any use is too much.

My personal thoughts on gen AI are complicated. A lot of my public work was vacuumed up for gen AI, and I'm not benefitting from it in any real way. But for text, I think we already lost that argument. To the average person, LLMs are too useful to reject them on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots". Mind you, it pains me to write this. I just think that ship has sailed.

I think we have a better shot at making that argument for music, visual art, etc. Most of it is utilitarian and most people don't care where it comes from, but we have a cultural heritage of recognizing handmade items as more valuable than the mass-produced stuff.

I don't think that ship has sailed as far as you suggest: there are strong proponents of LLMs/GenAI, but IMO not many more than NFTs, cryptocurrencies, and other technologies that ultimately failed to hit mainstream adoption had.

I don't think GenAI or LLMs are going away entirely, but I'm not convinced that they are inevitable and must be adopted, either. Then again, I'm mostly a hold-out when it comes to things like self-checkout, too. I'd rather wait a bit longer in line to help ensure a human has a job than rush through self-checkout if it means some poor soul is going to be out of work.

> I just think that ship has sailed.

Sadly, I agree. That's why I removed my works from the open web entirely: there is no effective way for people to protect their works from this abuse on the internet.

> To the average person, LLMs are too useful to reject them

The way LLMs are now, the average person outside the tech bubble has no use for them.

> on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots"

This is a bizarre argument. Humans don't "train" on books, they read them. This could be for many reasons, like to learn something new or to feel an emotion. The LLM trains on the book to be able to imitate it without attribution. These activities are not comparable.