You could have a web of trust where Linux-using organizations each spend $x continuously scanning and patching their own dependencies with AI, and sending each other patches and scans.
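A minimal sketch of that patch-exchange idea: each org signs the scan/patch records it sends, and peers verify before trusting a patch. All names here are hypothetical, and this uses a pairwise HMAC key for brevity where a real web of trust would use public-key signatures (e.g. Ed25519).

```python
# Sketch only: orgs exchange signed scan/patch records.
import hashlib
import hmac
import json

def sign_record(record: dict, key: bytes) -> str:
    """Sign a scan/patch record so a peer org can verify its origin."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str, key: bytes) -> bool:
    """Verify a record received from a peer before applying its patch."""
    return hmac.compare_digest(sign_record(record, key), signature)

key = b"org-a-to-org-b-shared-key"  # hypothetical pairwise key
record = {"package": "libexample", "version": "1.2.3",
          "patch": "fix-heap-overflow.diff"}  # hypothetical record
sig = sign_record(record, key)
print(verify_record(record, sig, key))          # genuine record passes
record["patch"] = "something-else.diff"
print(verify_record(record, sig, key))          # tampered record fails
```

The signing step is what makes the "web of trust" part work: you only apply patches whose records verify against a key you actually trust.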
LLMs aren't capable of doing this, and never will be, no matter what Anthropic tries to tell you.
That's the same mindset some people had 3 years ago when they said AI wouldn't be capable of software development. Look where we are now.
I have unlimited access to every single frontier model and I've tested all of them; they are not good at writing software.
They are basically slot machines: sometimes you win a little, sometimes you win a lot, but usually you just burn a ton of time and money staring at a screen (and frying your brain).
Mozilla seems to think it can.
https://blog.mozilla.org/en/privacy-security/ai-security-zer...
Ahh yes, I'm sure agents did this all autonomously without any human in the loop whatsoever. They are useless without experts to handle them.
So have the Linux-using organizations employ experts to handle them, then.