I don't think that's relevant. You can still find security issues in software nobody uses.
The question is one of impact, which depends on how widely the software is used.
Way fewer people are going to look at obscure things, so a lower percentage of the issues will likely have been found. There is less fame and fortune in spending security research time on obscure software, and most small libraries aren't covered by any bug bounty program, for example.
You don't need other people to find security issues anymore; you can do it yourself with AI.
Even accepting the premise, is it not immediately obvious to you that folks will be spending more money and effort aiming AI at higher-impact targets? This isn’t all-or-nothing.