AI finding vulnerabilities in open source software is going to make things super unpleasant for a while. I expect a shift back to closed source until we get through that period.

Is there any evidence that GenAI is incapable of red-teaming proprietary software? This seems like the sort of thing an agent with suitable tooling would be quite good at - I see someone already made an MCP server for Ghidra...

I suspect that the same AI trick works with binaries if you run them through a decompiler first. It would be interesting to try if I had time.
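For what it's worth, the plumbing for that experiment is pretty thin. A minimal sketch, assuming you've already exported pseudo-C from a decompiler (e.g. Ghidra headless mode, or that MCP server): split the output into function-sized chunks and wrap each one in an audit prompt. The regex splitter and the prompt template here are my own hypothetical illustration, not anything Ghidra ships; real decompiler output would need a sturdier parser.

```python
import re

def split_functions(pseudo_c: str) -> list[str]:
    # Naive splitter: assumes each function definition starts at column 0
    # with a return type, roughly true of simple decompiler pseudo-C.
    starts = [m.start() for m in
              re.finditer(r"^\w[\w\s\*]*\s\w+\s*\(", pseudo_c, re.M)]
    starts.append(len(pseudo_c))
    return [pseudo_c[a:b].strip() for a, b in zip(starts, starts[1:])]

def audit_prompt(func_src: str) -> str:
    # Hypothetical prompt template; which model/endpoint you send it to
    # is out of scope here.
    return ("Review this decompiled C function for memory-safety and "
            "logic bugs. Report only concrete findings.\n\n" + func_src)

# Toy stand-in for real decompiler output.
sample = """int check(char *s)
{
  char buf[8];
  strcpy(buf, s);
  return 0;
}

void noop(void)
{
  return;
}
"""

chunks = split_functions(sample)
print(len(chunks))                      # 2
print(audit_prompt(chunks[0]).splitlines()[0])
```

The interesting question is whether decompiled pseudo-C (lost variable names, flattened control flow) degrades the model's hit rate much compared to original source.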

That cuts both ways, though. Closed source likely has just as many vulnerabilities and bugs, but if AI can't find them, nobody fixes them, and it'll progressively fall behind on security.

Fair. But I also look at it as an opportunity: we get to fix lots of bugs. Bugs that bad actors can't use anymore.

A million eyes make no difference when it comes to AI: they're all going to find the same vulnerabilities. That means one guy running AI against your closed-source software is about the same as 1,000 guys running AI against your FOSS - except most of the people running it against your FOSS are doing it to help you, while the people who ran it against your closed codebase are never going to tell you about it.

AI finding vulnerabilities and cleaning them up is going to be a budget problem for closed-source vendors, who have gotten used to ignoring vulnerabilities until somebody screams at them.

Closed source software isn't kept in a magical safe in a cavern deep beneath the earth, guarded by dragons. Half the people in your company touch it every day, and probably plenty of contractors.