People used LLMs to find flaws in Google software.
If you're talking about the incident described in the article, it says it was a flaw in "a popular open-source, web-based system administration tool".
Google's blog (https://cloud.google.com/blog/topics/threat-intelligence/ai-...) says Google "worked with the impacted vendor to responsibly disclose this vulnerability", so in this incident, it's not Google software.
But did they use Gemini?
> the company added that it did not believe it was its own Gemini chatbot.
-TFA
I don't know, but given how often Gemini refuses benign requests IME, I would suspect it's a complete non-starter for finding security holes.