The article strongly implies they have the (Python) source code, and that it looks LLM-generated. I don't know about you, but I can usually tell LLM code from a mile away.
That can prove only half of the sentence: that an AI coding assistant was used to write the exploit (a.k.a. "weaponization").
The other half ("discovery") could be claimed as true only if the attackers' identity had been established and evidence about their prior activities had been gathered.
Even if it is likely that anyone searching for bugs today would also use AI agents to speed things up, I find it unacceptable for announcements like Google's to contain careless sentences that are either plainly false or true only if Google knows something it isn't disclosing.