>any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.

Shouldn't this naturally lead to a state where all (new) code is vulnerability-free? If AI vulnerability detection friction becomes low enough it'll become common/forced practice to pre-scan code.

Finding a vulnerability by looking at the diff that fixed it is very different from just looking through the code.

They're saying to run that scan on every diff before release, to see if it finds anything.

I believe their point was that:

"How likely is this diff a patch for an existing vulnerability?"

Seems to be an easier question to answer than

"Are there any new vulnerabilities introduced by this diff?"

In other words identifying that a patch is for a vulnerability is typically easier than finding the vulnerability in the first place.
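To make the contrast concrete, here is a minimal sketch of the "scan every diff" workflow being discussed. The prompt wording, the example diff, and the function name are all made up for illustration; the actual model call is deliberately left out.

```python
# Hypothetical sketch of pre-release diff scanning: wrap a unified diff in
# the two questions the thread contrasts, then (in real use) send the
# prompt to an LLM. Only the prompt assembly is shown here.

def build_scan_prompt(diff_text: str) -> str:
    """Build a review prompt asking both the 'patch?' and the
    'new vulnerability?' questions about a unified diff."""
    return (
        "Review the following unified diff.\n"
        "Q1: How likely is it that this diff patches an existing vulnerability?\n"
        "Q2: Are there any new vulnerabilities introduced by this diff?\n\n"
        + diff_text
    )

if __name__ == "__main__":
    example_diff = (
        "--- a/auth.c\n"
        "+++ b/auth.c\n"
        "@@ -10,7 +10,7 @@\n"
        "-    strcpy(buf, user_input);\n"
        "+    strncpy(buf, user_input, sizeof(buf) - 1);\n"
    )
    prompt = build_scan_prompt(example_diff)
    print(prompt.splitlines()[0])
    # In practice you would send `prompt` to a model here and gate the
    # release on its answers.
```

The asymmetry in the thread shows up in how a model can answer: Q1 can often be answered from the diff alone (a bounds check or a safer API call is a strong signal), while Q2 requires reasoning about the whole surrounding codebase.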

If the diff will just be fed to LLMs regardless then what is easier is probably a moot point.

The point is that even if all code commits are scanned as safe by AI, black hats can still analyse the commits and diffs to find vulnerabilities affecting people who haven't patched yet.

Scanning every commit doesn't automatically make everyone in the world patch immediately; vulns can still be found from commits and diffs and used against those who haven't patched yet.

The diff yields the patched code, which is used to produce the exploit.
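As a small illustration of why diffs help attackers: in a unified diff, the `-` lines are the pre-patch code, so a security fix points straight at what to attack on unpatched systems. The helper and the example patch below are invented for the sketch.

```python
# Sketch: pull the deleted (pre-patch) source lines out of a unified diff.
# For a security fix, these are typically the vulnerable lines that still
# exist on any system that hasn't applied the patch.

def removed_lines(diff_text: str) -> list[str]:
    """Return the source lines a unified diff deletes, skipping the
    '---' file header."""
    return [
        line[1:]
        for line in diff_text.splitlines()
        if line.startswith("-") and not line.startswith("---")
    ]

patch = """\
--- a/auth.c
+++ b/auth.c
@@ -10,7 +10,7 @@
-    strcpy(buf, user_input);
+    strncpy(buf, user_input, sizeof(buf) - 1);
"""
print(removed_lines(patch))  # → ['    strcpy(buf, user_input);']
```

The attacker doesn't need the fix itself, only the pointer it provides: the removed `strcpy` call tells them exactly where the unpatched binary is weak.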

> it'll become common/forced practice to pre-scan code.

You'd think.

But then you'd think people would do a lot of other things too. I hope, I guess.

The other danger is that "the cloud" may become even more overwhelmingly dominant. Which of course has its own large security costs.

Remember (to you both): extrapolation is a perilous business.

Obligatory xkcd https://xkcd.com/605/