Semi-related: has the rate of published exploits picked up as of late, or is it just that there's hype around AI as a security tool (offense or defense), so it's simply in the news more often?

Feels like there's something new every other day: Linux, Windows, mobile, various commonplace tools used by everybody, the list goes on.

I just did some analysis on this last weekend: in 2024 there were roughly 100 CVEs published every day. By April we had hit approximately 200 per day.

Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4.5 years. Since then it's been approximately two years.

There has definitely been a rapid uptick.
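To make the "doubling interval" claim concrete: under smooth exponential growth, the interval can be backed out from just two rate measurements. A minimal sketch (function name and numbers are illustrative; they only echo the rough per-day figures above, not any official dataset):

```python
import math

def doubling_time(t1: float, n1: float, t2: float, n2: float) -> float:
    """Doubling interval implied by a rate n1 at time t1 growing to n2 at t2,
    assuming smooth exponential growth between the two measurements."""
    return (t2 - t1) * math.log(2) / math.log(n2 / n1)

# Going from ~100 CVEs/day to ~200 CVEs/day is by definition one doubling,
# so the implied interval is just the elapsed time between the measurements:
print(doubling_time(0.0, 100, 2.0, 200))  # 2.0 (years)
```

Two point estimates are of course noisy; a fit over many years of data is what actually pins down the trend.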

Published CVEs seem like a bad metric to use for this - unless we assume that the ratio of really nasty vulns to not-too-bad vulns stays consistent.

Also, the question remains whether more CVE-laden code was produced in the first place, or whether automated detection simply improved.

It's easier to find a needle in the haystack if the haystack is 50% needles.

have the AI vibe code crappy apps so the related AI vuln finder can fix them

just doubled the value and use cases of your AI solution!

Another reason published CVEs aren't a great metric: one of the largest contributors to the jump in CVE counts over the past couple of years is that the Linux kernel now submits almost all bugs as CVEs, which wasn't the case before.

I wouldn't look at the numbers. There were a lot of "scam" CVEs before LLMs that weren't actual vulns. Nowadays it's more popular to collect CVEs, and there are a lot of people scanning with LLMs and reporting without checking (as happened with cURL). These CVEs are often not verified by anyone.

There probably are more vulnerabilities being found, but the number of CVEs is not a good metric.

Did you publish this anywhere? Would love to read more.

The rules around CVE reporting changed recently, so it's expected that a lot more are accepted.

If one reads between the lines in part 1, the code in question was introduced for AI features, and the exploit was found by humans:

https://projectzero.google/2026/01/pixel-0-click-part-1.html

So AI usage increases bugs and humans have to weed them out!


There are reports from people who manage security bugs in OSS that there has been a big uptick in reports: initially low-quality ones that were mostly bogus, but now many more legitimate ones as well.

This is pure guesswork - I am not a security researcher - but my guess would be that AI is increasing the amount of low-quality, exploitable attack surface available, while simultaneously providing security researchers with an accelerant for their work. Which is to say, it's great if you use it well and really bad if you use it poorly.

Not low quality if it works!

Those two things have almost nothing to do with one another. Lots of low-quality things work; they're still low quality.

The "low quality" refers to the features with security holes. So no, it didn't work (in this hypothetical).

But it is low quality if it's vulnerable to exploits. And if that's the case, I wouldn't say it really "works".

only until it's ransomware'd

I've reported a few very serious issues to vendors of widely used tools in recent weeks, and it's been even harder than usual to get them acknowledged - the teams that respond are reportedly swamped.

There definitely is hype around AI as a security tool right now. Someone else pointed out that the rate of CVEs has gone up, but that doesn't tell us why.

This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.

A bit of both (it finds new things, and the news is hyped up), and a third factor is that more people are trying to find things. The authors might have been able to do this already - you still need a decent understanding to get useful work out of the tool and to verify its results - but the shiny-new-toy and FOMO factors make people spend hours on this that they'd otherwise have spent on something else.

I've seen quite a few people say they were inspired by the previous report, which is presented as "the model pointed us to it" - and there's FOMO about missing out if you don't snatch bugs now.

I think AI helped researchers navigate the codebase better; it's not necessarily the case that the AI itself is succeeding at exploitation.


The Mythos announcement was crazy, I think: "...has already found _thousands_ of severe security vulnerabilities across _all_ OSes"!
