This is largely reactionary and false on both counts.
AI has been good for years now. Good doesn't mean perfect. It doesn't mean flawless. It doesn't mean the hype is spot-on. Good means exactly that: it is good at what it is intended to do.
It is not destroying open source either. If anything, it means more open source contributors using AI to create code.
You can call anything done by AI "slop" but that doesn't make it so.
Daniel and the curl project were also overreacting. A reaction was warranted, but there were many measures they could have taken before shutting down bug reporting entirely.
If you replace "AI" with "junior dev", "troll", or "spammer", what would things be like then? If the concern is scale, you can troll, spam, and be incompetent at scale just fine without the help of AI.
It's gatekeeping and sentimentality amplified.
I can't wait for the people who call everything slop to be overshadowed by people so used to LLMs that their usage is no different from using a linter, a compiler, or an IDE: just another tool good at certain tasks but not others. Abusable, but with reasonable mitigations possible.
I keep reading posts about what open source users are owed and not owed. GitHub restricting PRs, developers complaining about burnout. Have you considered using AI "slop" instead? Give a slop response to what you consider to be a slop request? Oh, but no, you could never touch "AI", that would stain you! (I speak to the over-reactors.) You don't need AI; you could do anything AI can do (except AI doesn't complain about it all the time, or demand clout).
What is the largest bottleneck and hindrance to open source adoption? Money? No, many, including myself, are willing to pay for it. I've even struck out trying to pay an open source project maintainer to support their software. It's always support.
Support means triaging bugs and feature requests in a timely manner. You know what helps with that a lot? A tool that understands code generation and troubleshooting well, along with natural language processing. A bot that can read what people are requesting and give them feedback until their reports meet a certain standard of acceptability, so you as a developer don't have to deal with the tiring back and forth. That same tool can generate code in feature branches, fix people's PRs so they meet your standards and priorities, highlight changes and how they affect your branch, and prioritize them for you, so you can spend minimal time reviewing code and accepting or rejecting PRs.
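The triage loop described above can be sketched roughly like this. Note this is a toy illustration, not any real bot: `check_report` stands in for whatever LLM call you'd actually make, and the required sections and feedback wording are hypothetical.

```python
# Minimal sketch of an automated bug-report triage loop.
# check_report is a placeholder for a real LLM call; the required
# fields and feedback messages here are illustrative assumptions.

REQUIRED_FIELDS = ["steps to reproduce", "expected behavior", "actual behavior"]

def check_report(report: str) -> list[str]:
    """Return one feedback message per required section the report lacks."""
    missing = [f for f in REQUIRED_FIELDS if f not in report.lower()]
    return [f"Please add a '{f}' section." for f in missing]

def triage(report: str, revise) -> str:
    """Bounce the report back to the reporter until it passes the checks."""
    feedback = check_report(report)
    while feedback:
        report = revise(report, feedback)  # reporter answers the bot's feedback
        feedback = check_report(report)
    return report  # now meets the acceptance criteria; hand it to a human

# Example: a reporter who fills in whatever sections the bot asks for.
def cooperative_reporter(report, feedback):
    for msg in feedback:
        section = msg.split("'")[1]
        report += f"\n{section}: ..."
    return report

final = triage("my app crashes", cooperative_reporter)
```

The point is that the maintainer only sees `final`, after the back-and-forth has already happened; the bot absorbs the repetitive part of support.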
If that isn't good for open source then what is?
A bad attitude towards AI is destroying open source projects led by people entrenched in an all-or-nothing false-dichotomy mindset against AI. And AI itself is good. Not great, not replace-humans great, but good enough for its intended use. Great with cooperative humans in the decision-making loop.
Use the best tool for the task!
That should be like #2 in the developer rule book, with #1 being:
It needs to work.
> Good doesn't mean perfect. it doesn't mean flawless.
"Good" means "I can trust it to give me code that is at least as good as what a moderately skilled human would produce". They still aren't there, even after years of development. They still regularly give you code that doesn't follow the correct logic, or which isn't even syntactically valid. They are not good, or even remotely good.
That's just your expectation. If it can do as much as the least competent human, that's already a huge deal. You're expecting it to think for you instead of assist you.
You know what it is capable of, so use it accordingly. It saves lots of time in troubleshooting and generating starter code. In some cases it can generate full-featured, complete production apps on its own that people are using without major issues.
Even with your example, you fix syntax and errors here and there instead of writing it from scratch. Which approach takes more time depends on the model, the code, and you. Like the author, your measuring stick is humans for some reason.
You know it's not really "AI", right? That's just a marketing term. There is no intelligence involved; it's autocompletion. Your argument is like saying IDE autocompletion isn't always great so it should never be used.