It destroys the previously implicit contract that the writer actually put a decent amount of thought and time into the writing, and that the ideas expressed are theirs and original.
I don’t mind good usage of LLM-assisted writing, but if the author can’t even be bothered to remove the most obvious AI tells, I take it as a proxy that they probably put very little effort into the article.
It’s also often a horribly verbose style, where the same ideas could be presented with 20% of the prose.
It’s also ruining the entire experience on web communities (although here on HN the moderation team seems to be keeping it at bay at this point, much appreciated).
All in all, it’s objectively a net negative for the readers, and serves only the author.
I prefer original, less coherent articles that are genuine and where I know the ideas expressed are really the author’s and not the LLM’s inference.
Last but not least, I don’t think the grandparent you’re replying to was particularly hateful in the grand scheme of things.
Why would you prefer a less coherent article? If an article has utility, I will read it, no matter what the source is.
The problem with AI-written articles is the lingering uncertainty about whether there’s any utility at all: 2000 words in, you realize it’s been 90% filler so far, but you keep thinking maybe it will lead somewhere soon. It doesn’t, and you’ve wasted ten minutes reading glorified blog spam micro-targeted at whatever niche you were researching.
After a while you pick up on the warning signs and just bail early, without any guilt about false positives. It’s really the only sustainable strategy in a world where content that took 5 seconds to generate can absorb 5 minutes of your attention.
For the same reason, people prefer authenticity over mass-produced, generic goods.
But authentic writing takes a lot of effort, and nobody wants to do that anymore in 2026, so the status quo is more mass-produced, generic content, which is frustrating and (to me) a regression.