LLMs are not a silver bullet. They're just as likely to point out non-issues as they are to hallucinate fixes for them.
Their output is only as valuable as the human using them. If that human isn't a security expert to begin with, they can easily be led astray and lulled into a false sense of security.
See curl, for example. Hundreds of bogus reports rejected by an expert human. One large report with valuable data that still requires an expert human to sift through and validate.