> To be fair, you are assuming that the input wasn't garbage to begin with.
It's not an assumption. Look at this example: https://news.ycombinator.com/item?id=45591707
The author posted their input to the LLM in the comments after receiving criticism, and that input was much better than their actual post.
In this thread I'm less sure: https://news.ycombinator.com/item?id=45713835 - it DOES look like there was something interesting thrown into the LLM that then put garbage out. It's more of an informed guess than an assumption: you can tell the author did have an experience to share, but you can't really figure out what's what through all the slop. In this case the author redid their post in response to criticism and it's still pretty bad, and since they kept using an LLM to post comments in the thread, I can't really tell how much non-garbage was going in.
What's really sad here is that it's all form over function. The original got the point across, didn't waste words, and managed to be mostly coherent. The result, after spending a lot of time coaxing the AI through various rewrites (11!), was utter garbage. You'd hope we'd somehow reach a stage where people realize that what you think is what matters, not how pretty the packaging is. But with middle management usually clueless, we've conditioned people to write for an audience that doesn't care either: one that goes by word count rather than by signal-to-noise ratio, clarity, and correctness.
This whole AI thing is rapidly becoming very tiresome. But the trend seems to be to push it everywhere, regardless of merit.