> It seems like everyone is getting too worked up about AI-generated text. Yes, it's bad, but bad writing has existed forever. We don't see most of the older stuff because it disappears (thankfully) into oblivion, and you are left with the works of Chaucer and Shakespeare.

You're missing the point. In the past, bad writing was just bad writing, and it was typically easy to detect. Now the main contribution of AI is bad writing that can masquerade as good writing, be produced in industrial-scale quantities, and flood all the channels. That's a very different thing.

IMHO the main achievement of LLMs will be to destroy. They'll consume utterly massive quantities of resources to undermine processes and technologies that once created a huge amount of value (e.g. using the internet for wide-scale conversation).

I mean, schools are going back to handwritten essays, for Christ's sake.

> You're missing the point. In the past, bad writing was just bad writing, and it was typically easy to detect.

If AI-generated text were well written, would it matter to you? Is it bad to use Grammarly?

I don't see anything inherently wrong with using AI tools to write, as long as writers take responsibility for ensuring the final result is good. Fighting against the use of LLMs seems like a fool's errand at this point. Personally I've been using Google Translate for years to help with writing in German, little knowing at the time that it was using transformers under the covers. [0] I'm pretty sure my correspondents would have thanked me had they known. The same applies to text written in English by non-native speakers.

[0] https://arxiv.org/abs/1706.03762

edit: fixed typo. Just proof this is not an LLM.

> If AI-generated text were well written, would it matter to you?

Yes, of course.

1) I don't want to waste my time with slop pumped out through a mindless process by someone who doesn't give a shit. That includes turning half an idea into a full essay of bullshit.

2) You have to distinguish between "good writing" and (let's call it) "smooth text construction." One of the big problems with LLMs is that they can be used to generate slop that lacks many of the tells you could previously use to quickly determine that you're reading garbage. It's still garbage, just harder to spot, so you waste more time.

> I don't see anything inherently wrong with using AI tools to write, as long as writers take responsibility for ensuring the final result is good.

Yeah, but what about the writers who don't? That's what I'm talking about. These tools benefit the bad actors far more than the ones who are trying to do things properly.

> Personally I've been using Google Translate for years to help with writing in German, little knowing at the time that it was using transformers under the covers. [0] I'm pretty sure my correspondents would have thanked me had they known.

Honestly, I think Google Translate is a lot harder to misuse than an LLM chatbot. These things aren't all the same.

I understand your argument, but the distinctions you're making seem really hard to uphold. Adapting to LLMs means we'll adopt new standards for quality or, more likely, re-emphasize old ones, like assigning trust to specific authorities.

If you read something from Simon Willison, it's generally worth reading. [0] (Actually pretty great a lot of the time.) Everything else is the literary equivalent of spam calls. Maybe it's time to stop answering the phone?

[0] https://simonwillison.net/

> Adapting to LLMs means we'll adopt new standards for quality or, more likely, re-emphasize old ones, like assigning trust to specific authorities.

I think we're in violent agreement; I just have a less sanguine attitude towards it. LLMs will "undermine processes and technologies that once created a huge amount of value" (to quote myself above). We'll adapt to that, in the sense that life goes on, but major things will be lost.