Sounds good.
Did you also test on old source code, to see if it could find the vulnerabilities that were already discovered by humans?
Isn’t that addressed in this passage from the (Anthropic) article:
“Our first step was to use Claude to find previously identified CVEs in older versions of the Firefox codebase. We were surprised that Opus 4.6 could reproduce a high percentage of these historical CVEs”
https://www.anthropic.com/news/mozilla-firefox-security
Anthropic mention that they did this beforehand, and it was the strong performance there that led them to look for new bugs (since they couldn't be sure it wasn't just memorising vulnerabilities that had already been published).
I really like this as a suggestion, but getting open-source code that isn't in an LLM's training data is a challenge.
Then, with each model having a different training cutoff, you end up with no useful comparison for deciding whether new models are improving the situation. I don't doubt they are; I'm just not sure this is a way to show it.
Yes, but perhaps being trained on a codebase has only a small impact on the ability to find bugs in it. You could run a set of experiments to find out, and that would be interesting in itself.
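One way such an experiment might be sketched (all names, dates, and data below are purely hypothetical stand-ins, not anything from the article): compare the model's detection recall on CVEs published before versus after its training cutoff. A large gap between the two groups would hint at memorisation; similar recall would suggest genuine bug-finding ability.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Cve:
    """Hypothetical record of a known CVE and whether a model re-found it."""
    cve_id: str
    published: date
    found_by_model: bool  # stand-in for the result of an actual model run

# Hypothetical training cutoff for the model under test.
TRAINING_CUTOFF = date(2024, 6, 1)

def recall_by_cutoff(cves):
    """Split CVEs by publication date relative to the training cutoff
    and compute detection recall for each group."""
    groups = {"before_cutoff": [], "after_cutoff": []}
    for cve in cves:
        key = "before_cutoff" if cve.published < TRAINING_CUTOFF else "after_cutoff"
        groups[key].append(cve.found_by_model)
    return {k: (sum(v) / len(v) if v else None) for k, v in groups.items()}

# Toy data standing in for real evaluation results.
sample = [
    Cve("CVE-2023-0001", date(2023, 3, 1), True),
    Cve("CVE-2023-0002", date(2023, 9, 1), True),
    Cve("CVE-2025-0001", date(2025, 1, 15), True),
    Cve("CVE-2025-0002", date(2025, 2, 20), False),
]
print(recall_by_cutoff(sample))  # {'before_cutoff': 1.0, 'after_cutoff': 0.5}
```

Of course the post-cutoff CVEs must genuinely be absent from training data, which is exactly the difficulty raised above; each model's cutoff would shift the split point.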