Because you need to do some pre-filtering on where to focus your attention, and you want to make sure the author put some thought into the article without having to analyze it.
Because LLMs have made the cost of publishing “thoughts” extremely low, there’s now an over-supply of content that looks decent on the surface but that the author has probably spent less time on than the reader will.
Are we really so far into the LLM denial mindset that we consider an author spending multiple months crafting this to be "worthless" and less of an investment than your casual reading?
No, I believe this is a great post. It’s awesome. Even more so because it’s AI generated, as it shows what AI can do when given a lot of quality material to work with.
I’m just talking about the general topic of how useful a “this is AI generated” classifier would be.
Don't we already have these filters in place? I only saw this because it was highly upvoted on HN, for example - I don't read every new submission. I also read things sent by friends and family, shared by curators I trust, and so on.
Of course these systems may eventually break down, but for now they seem to work.