Yeah, I guess if I was him, I would just close issues silently and ban the person who created them, if possible. I don't think I could be as nice as he is.
> Yeah, I guess if I was him, I would just close issues silently and ban the person who created them, if possible. I don't think I could be as nice as he is.
I think shaming the use of LLMs to do stuff like this is a valuable public service.
Imagine the headline if a slop security report turns out to be real but the maintainer ignored it. It's a lose-lose situation for the maintainers.
Thankfully, in this case it's a supposed curl vulnerability whose reproducer doesn't even use curl. Dismissing that one is a fairly safe call.
The problem is that AI can generate answers and code that look relevant, as if they were written by someone very competent. And since AI can produce a huge amount of code in a short time, it's hard for a human reviewer to analyze it all and determine whether it's useful or just BS.
And the worst case is when AI generates great-looking code with a tiny, hard-to-discover catch that takes hours to spot and understand.
True, that is a problem in some cases. This one, though, was pretty clear cut. At least the obvious time-wasters would get the treatment.