I’m with you until that last sentence, which I’ve been thinking about as “… until AI code testing, vulnerability scanning, and developer support tools help to limit the number of 0-days and vulnerabilities making it into production”.
So prevention will be more important than AI-assisted rapid containment or patching, though both of those capabilities will still be necessary as part of defense in depth.
And some sort of AI-enabled security analysis across the organization's architecture, run as part of testing before new software enters production, to identify potential vulnerabilities introduced by configuration changes or upgrades that modify how systems interact with each other.
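As a rough illustration of where that pre-production gate could sit, here's a minimal sketch. Everything in it is assumed for illustration: the hypothetical manifest files (`current.json`, `candidate.json`) describing each service's interaction-relevant settings, the `INTERACTION_KEYS` list, and the `interaction_diffs` helper. The AI analysis itself is stubbed out as a deterministic diff; the point is just that the check runs and can fail the pipeline before promotion.

```python
import json
import sys

# Settings whose changes alter how systems interact with each other and
# therefore warrant security review before the release is promoted.
# (Illustrative set; a real gate would be tuned to the architecture.)
INTERACTION_KEYS = {"ports", "endpoints", "depends_on", "auth_mode"}

def load_manifest(path: str) -> dict:
    """Load a service manifest: {service_name: {setting: value, ...}}."""
    with open(path) as f:
        return json.load(f)

def interaction_diffs(current: dict, candidate: dict) -> list[str]:
    """Flag services whose interaction-relevant settings changed."""
    findings = []
    for service, new_cfg in candidate.items():
        old_cfg = current.get(service, {})
        for key in INTERACTION_KEYS:
            if old_cfg.get(key) != new_cfg.get(key):
                findings.append(
                    f"{service}: {key!r} changed from {old_cfg.get(key)!r} "
                    f"to {new_cfg.get(key)!r}"
                )
    return findings

if __name__ == "__main__":
    # Hypothetical manifests exported from the current release and the
    # release candidate as part of pre-production testing.
    current = load_manifest("current.json")
    candidate = load_manifest("candidate.json")
    findings = interaction_diffs(current, candidate)
    if findings:
        print("Interaction changes needing security review before production:")
        for finding in findings:
            print(f"  - {finding}")
        sys.exit(1)  # fail the pipeline until the changes are reviewed
    print("No interaction-relevant configuration changes detected.")
```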
I’ve been trying to guess the timeframe for seeing improved secure development, and I’m hoping it’s closer to 6 months to 1 year given the speed of AI adoption and AI progression. It may be closer to 3 years, as you stated.
In the meantime, is there more to be done than this (in no particular order)?
- Patch COTS software
- Re-evaluate the scoring for previous vulnerabilities (see the EPSS sketch after this list)
- Set up containment capabilities for systems that can’t be patched / high-risk vendors
- Use frontier-model vuln scanning and patching for home-grown systems, which may have more 0-days than COTS depending on the organization’s capability
- Limit the number of vendors / simplify the tech stack
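On the re-scoring item: one concrete, low-lift way to re-evaluate a backlog of previously scored vulnerabilities is to pull current exploit-probability data from FIRST’s public EPSS API and re-rank by that rather than by a static CVSS score taken at triage time. A minimal sketch, assuming `requests` is installed; the CVE backlog is illustrative:

```python
import requests

# FIRST's public EPSS API (Exploit Prediction Scoring System).
EPSS_API = "https://api.first.org/data/v1/epss"

def fetch_epss(cves: list[str]) -> dict[str, float]:
    """Return {cve_id: epss_probability} for a batch of CVE IDs."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cves)}, timeout=30)
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}

if __name__ == "__main__":
    # Illustrative backlog of previously triaged CVEs; substitute your own.
    backlog = ["CVE-2021-44228", "CVE-2017-0144", "CVE-2019-0708"]
    scores = fetch_epss(backlog)
    # Re-rank: highest current exploitation probability first.
    for cve, epss in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{cve}: EPSS {epss:.3f}")
```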
I’d be happy to hear how others are thinking about this.
We simply can't absolve ourselves of responsibility for the input and expect a hardened output. It's ABSOLUTELY up to the engineers to have test harnesses and scenarios for testing, vulnerability scanning, etc. Just because we can move faster via prompts doesn't mean we can neglect the SDLC.
I think there's an opportunity to reinvent the pipeline with AI-powered tools to assist, but the onus is still on the person to ensure they are deploying something that has been tested.
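To make that "onus on the person" concrete, here's a minimal sketch of a deploy gate for a Python project: the deploy step refuses to proceed unless the test harness (pytest) and a dependency vulnerability scan (pip-audit) both pass. The tool choices are illustrative assumptions, not a prescription; the point is that AI-generated code goes through the same SDLC checks as anything else.

```python
import subprocess
import sys

# SDLC gates that must pass before anything ships, AI-written or not.
# pytest runs the test harness; pip-audit scans dependencies for known CVEs.
GATES = [
    ["pytest", "--quiet"],
    ["pip-audit"],
]

def run_gates() -> bool:
    """Run each gate in order; stop and report on the first failure."""
    for cmd in GATES:
        print(f"Running gate: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Gate failed: {' '.join(cmd)}. Blocking deploy.")
            return False
    return True

if __name__ == "__main__":
    if not run_gates():
        sys.exit(1)
    print("All gates passed. Safe to hand off to the deploy step.")
```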