People are putting out blog posts and readmes constantly that they obviously couldn't even be bothered to read themselves, and they're making it to the top of HN routinely. Often the author had something interesting to share and the LLM has erased it and inserted so much garbage you can't tell what's real and what's not, and even among what's real, you can't tell what parts the author cares about and which parts they don't.
All I care about is content, too, but people using LLMs to blog and write readmes are routinely getting garbage content past the filters and into my eyeballs. It's especially egregious when the author put good content into the LLM and pasted the garbage output at us.
Are there people out there using an LLM as a starting point but taking ownership of the words they post, taking care that what they're posting still says what they're trying to say, etc? Maybe? But we're increasingly drowning in slop.
Quality, human-made content is seldom rewarded anymore. The bar for quality has risen so high that an alternative strategy is to use LLMs for a lottery approach to content: produce as much LLM-assisted content as possible in the hope that something goes viral. Since LLM writing is effectively free to produce, eventually something will work if enough content is produced.
I cannot blame people for using software as a crutch when human writing has become too hard and is seldom rewarded anymore unless you are super-talented, which statistically the vast majority of people are not.
To be fair, you are assuming that the input wasn't garbage to begin with. Maybe you only notice it because it is obvious. Just like someone would only notice machine translation if it is obvious.
> To be fair, you are assuming that the input wasn't garbage to begin with.
It's not an assumption. Look at this example: https://news.ycombinator.com/item?id=45591707
The author posted their input to the LLM in the comments after receiving criticism, and that input was much better than their actual post.
In this thread I'm less sure: https://news.ycombinator.com/item?id=45713835 - it DOES look like there was something interesting thrown into the LLM, which then put garbage out. It's more of an informed guess than an assumption: you can tell the author did have an experience to share, but you can't really figure out what's what because of all the slop. In this case the author redid their post in response to criticism, and it's still pretty bad to me. They then kept using an LLM to post comments in the thread, so I can't really tell how much non-garbage was going in.
What's really sad here is that it is all form over function. The original got the point across, didn't waste words, and managed to be mostly coherent. The result, after a lot of time spent coaxing the AI through the various rewrites (11!), was utter garbage. You'd hope we somehow reach a stage where people realize that what you think is what matters, not how pretty the packaging is. But with middle management usually clueless, we've conditioned people to write for an audience that doesn't care either; they go by word count rather than by signal-to-noise ratio, clarity, and correctness.
This whole AI thing is rapidly becoming very tiresome. But the trend seems to be to push it everywhere, regardless of merit.
The problem is the “they’re making it to the top of HN routinely” part.
That’s true, I just wanted to offer a counter perspective to the anti-AI sentiment in the blog post. I agree that the slop issue is probably more common and egregious, but it’s unhelpful to discount all AI assisted writing because of slop. The only way I see to counteract slop is to care about the reputation of the author.
And how does an author build up said reputation?