It’s just a stochastic parrot! Somehow all these vulnerabilities were in the training data! Nothing ever happens!

(/s if it’s not clear)

What an irritating comment. Identifying bugs in code is, in fact, exactly the kind of thing a stochastic parrot could do. Vulnerability research is already a massively automated industry, and there's even a very well-established term -- "script kiddies" -- for malicious teenagers who run scripts that automatically find vulnerabilities in existing services without any knowledge of how those services work. A new form of automation can certainly be a useful tool, but it is still in no way an indication of "intelligence" or any deviation from the expected behavior of next-token prediction guided by statistical probability.
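To make the point concrete that finding vulnerabilities can be purely mechanical: here is a toy, hypothetical scanner in the spirit of the scripts kiddies run. It flags calls to a few classically dangerous C functions by regex alone, with zero understanding of the code it scans. The function list and names are illustrative, not any real tool's API.

```python
import re

# Illustrative list of classically unsafe C functions; real scanners
# use far larger rule sets, but the principle is the same.
DANGEROUS_CALLS = ["gets", "strcpy", "sprintf", "system"]
PATTERN = re.compile(r"\b(" + "|".join(DANGEROUS_CALLS) + r")\s*\(")

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, function_name) for every flagged call.

    Pure pattern matching: no parsing, no semantics, no "intelligence".
    """
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            hits.append((lineno, match.group(1)))
    return hits

sample = "char buf[8];\ngets(buf);\nstrcpy(buf, argv[1]);\n"
print(scan(sample))  # flags the gets() and strcpy() calls
```

A dumb pattern matcher like this already "finds vulnerabilities"; scaling the same idea up with statistics rather than hand-written rules doesn't change its nature.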

Thank you very much for acting as a useful foil and proving my point.

You didn't make a point, and still haven't. You screeched a bunch of buzzphrases sarcastically as if that were equivalent to making a point, which is about par for the course for the level of reasoning (i.e., none) shown by people with your position. You seem to take it for granted that merely asserting that LLMs aren't next-token-prediction programs makes it factually true, without offering any argument or reasoning for why that is the case. Of course, any attempt to reason toward that position falls apart under trivial scrutiny, so it's no wonder you're averse to reasoning about it and settle for trite assertions.