From Google's GTIG report: https://cloud.google.com/blog/topics/threat-intelligence/ai-...
"Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability. For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data (e.g., detailed help menus and the clean _C ANSI color class)."
This only indicates that an AI coding agent was used to write an exploit.
No such circumstantial evidence can prove that an AI model was used to find the bug.
Of course, it is quite likely that an AI model was used to speed up the search for bugs, but that can never be proven as long as all you can see is the code used to exploit the bug.
That's evidence the script was written by an AI, but not necessarily that the exploit was found by it.
I think it would be more newsworthy these days if hackers had handcrafted all their code without any use of AI.
The post reads like AI wrote it. From that I can deduce that all strategy at Google has been generated by AI.