An LLM finding problems in code is not at all the same as someone using it to contribute code to a project that they couldn't write, or haven't written, themselves. A report stating "there is a bug/security issue here" is not itself something I have to maintain; it's something I can react to by writing a fix, and it's that fix I then have to maintain.
Well, until you start getting dozens of generated reports that you take the time to review, only to find out they're all plausible-looking bullshit about non-issues.
We already had that happening with other kinds of automated tooling, but at least those false reports used to be easier to catch with a quick skim.