I'd say edited. I did wonder if they used AI to find the issues in the first place, but they would brag about that front and center and pivot to an AI-first security company within seconds. Then again, maybe they used AI to help them map out what happens in the code, even though it's Go code and it should be pretty readable / obvious what happens.
That said, I think it's weird; the vulnerabilities seem to have been found through thorough code review and comprehension, so why cut corners by passing the writeup through AI?
I don't think they would brag about it if they were found by AI, but based on their description I suspect most of this work was definitely done by LLMs, and then checked by humans.
I am not sure you are correct here :) As the one who found the 9 CVEs, I am pretty sure I am not an LLM. But these days, it is hard to know.
(No LLMs were used for the research.)
Why do you have that belief? If some researcher used AI, they'd be singing the praises of AI from the rooftops. There'd be Show HN on how cool AI is that it can find CVEs. VCs would be flooding the dev with offers, for what reason who knows, but that's VCs.
Why would you think someone would hide the use of AI? I'm not familiar with a timeline with that behavior.
Infosec is a bit different - this industry is all about (1) expert knowledge and (2) secret sauce. You disclose a part of your secrets, like the security findings, and in exchange your reputation for expert knowledge increases. Telling the world "I automated the boring parts with LLMs" will not only get you a "duh, everybody does it now" but will also cast doubt on your expertise. Hence those repeated disclaimers at the beginning: "we didn't use fuzzers etc., it was all a manual process because we knew what to look for."