Imagine if these “benevolent” erroneous AI bug reports were part of a coordinated effort to map how vulnerable the projects and maintainers are, not the code. A slow or absent response marks a project as a likely target for takeover or exploitation, and accepting code without review signals how easy it would be to inject a vulnerability.
It's an interesting idea, but I wouldn't consider a slow or no response a marker of a likely target; I think that's actually a good defense strategy against spam like this.
The line of thought is that a slow response lengthens the time window during which an eventually discovered vulnerability can be exploited, thus increasing its value.