> because we typically want to know the writer of a piece. we want to know where to lay credit.

Does the average person really care all the time? Maybe about the outlet it comes from as a whole (factuality, political lean), but more rarely about the exact author. Many don’t even have the critical skills for any of it and consume whatever content is chosen for them by whatever algorithm is there. We probably should care, I just don’t think a lot of us do.

For me, needing to know that something’s written by AI serves three purposes:

1) acknowledging that it might be slop that someone threw together with no effort (important with regard to spam)

2) acknowledging that, depending on the model, the factuality might be low when it comes to anything niche (though people are wrong often enough too)

3) mentally preparing myself for AI bullshit slop language, like “It’s not X, it’s Y.”, or choosing not to engage with it at all (it's the same disgust reaction as when I find a PDF and realize it's just scanned images, not proper text)

In general, unless the goal is human interaction or the somewhat rare case of wanting to read a specific blog etc., most of the time I don’t categorically care whether something was lovingly created by a human or shoved out by a half-baked version of Skynet - only that it’s good enough for whatever metrics I want to evaluate it by. I’m not ashamed of that, and maybe that’s why I don’t take issue with AI-generated code either, as long as it’s good enough (sometimes better than what people write, other times quite shit when the models and harnesses are bad).

In Peter Watts's Blindsight, the aliens understand language as spam, a hostile attempt to waste their time, and respond by opening fire.

Reading LLM slop without warning makes me see their point of view.

I think there are useful ways to engage with LLM writing, but they are often very different from the ways you engage with human writing.

A human writer, a good one, often has ideas that are denser than the words on the page, and close reading is rewarded: it helps you unpack the many implications.

With AI writing, there are usually fewer ideas than words, so it requires a different kind of engagement. Either the human prompter behind it didn't supply enough ideas, or they were noncommittal enough that their very indecision got baked in.

LLMs are very prone to hedging and circling around a point while not saying much of anything. Maybe that's the easiest way to respond to RLHF incentives and corporate-speak training data. Or maybe they're intrinsically stuck being unable to find the right next token, so they endlessly spiral around via all of the wrong ones. Either way, there's often a whole lot of cotton-candy text that dissolves when you try to look at it more closely.