Considering that LLMs are fairly good at finding security-relevant issues - the bugs that matter - there is certainly scope for them. I would not be surprised if, in the near future, not checking your code with an LLM is considered negligent. This week circumstances forced me to triage a messed-up nginx configuration. While this is not the focus of my work, the LLM pointed out two places where the configuration did not follow best practices, both with security-relevant implications (it also turned out the team was running an old release, and one of the issues had already been fixed as a result of feedback from a pen-test assessment).
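
For context, the kind of finding an LLM review of an nginx config tends to flag looks roughly like the sketch below. These are illustrative examples only, not the two issues from my case; the directives shown (server_tokens, ssl_protocols, alias) are standard nginx settings used here purely as an assumption about what "not following best practices" might look like.

    # Hypothetical examples of security-relevant nginx misconfigurations;
    # not the actual findings from the anecdote above.
    server {
        listen 443 ssl;
        server_name example.com;

        # Version disclosure: without this, nginx advertises its exact
        # version in response headers and error pages.
        server_tokens off;

        # Deprecated protocols: leaving TLSv1/TLSv1.1 enabled weakens the
        # TLS setup; restricting to TLSv1.2+ is the usual recommendation.
        ssl_protocols TLSv1.2 TLSv1.3;

        # Alias path traversal: "location /static" (no trailing slash)
        # combined with an alias can expose files outside the intended
        # directory via requests like /static../secret. Matching trailing
        # slashes on both sides avoids it.
        location /static/ {
            alias /srv/app/static/;
        }
    }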

LLMs seem to be really good at analyzing things. I don't trust them to produce much on their own, but the ability alone to take a few files and bits and pieces and ask for a response pointed in a certain direction has been transformative to my work.

LLMs are not a silver bullet. They're just as likely to point out non-issues as they are to hallucinate fixes for them.

Their output is only as valuable as the human using them. If that human isn't a security expert to begin with, they can easily be led astray and lulled into a false sense of security.

See curl, for example. Hundreds of bogus reports rejected by an expert human. One large report with valuable data that still requires an expert human to sift through and validate.