I think the problem is that LLMs are good at producing plausible-looking text, while discerning whether a random post is good or bad takes effort. And it gets really bad when the signal-to-noise ratio is low, since slop is so much easier to make than quality content.