Yes, this is a thing. Bad writing with an interesting idea underneath is still interesting when it comes from a human, because we expect that human to get better at sharing their ideas. In other words, we see potential.

But LLMs don't have potential. You can make an LLM write a thousand articles in the next hour and it will not get one iota better at writing for having done so. A person would massively improve merely from writing a dozen; 100x that effort and the LLM is no better off than when it started.

Every model release, arriving like clockwork every six months, gets hailed as a "game changer," yet LLMs are just as empty and dumb as they were when GPT-2 was new half a decade ago. There is no long-term potential here. More power, larger and hotter and more expensive data centers, and the returns are asymptotic; we're already past the point of diminishing returns.

And you know, I wouldn't care all that much--hell, might even be enthusiastically involved--if folks could just be honest with themselves that this turd sandwich of a product is not going to bring about AGI.