> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

Google, Cloudflare, and Microsoft are a trio of companies that get to see most of what's going on on the internet. I imagine that if they see you attacking them, they can work back from that and get remarkably far, even against sophisticated actors. If it's their LLM, they presumably keep transcripts. If you searched for the affected API function via a search engine, they almost certainly know. Even if you used a competing search product, you probably went to a site that has Google Analytics. Oh, and one of these companies probably has your DNS lookups. And a good chunk of the world's email traffic. And telemetry from your workstation. And auto-uploaded crash reports... And if it's bad, they can work together behind the scenes to get to the bottom of it.

So, when their threat intel orgs say they have high confidence in something, I'd be inclined to believe it.

None of what you've said is untrue. And if this were an internal report to an executive, I'd agree with it. But this is a public statement, and I'm more inclined to believe it's part of a coordinated run-up to a move to ban the import of 'dangerous' Chinese AI models -- or something else equally self-serving -- than a simple statement of truth.

I don't doubt that they found some evidence of AI use. I'm just skeptical that the amount and strength of that evidence has anything to do with their making this statement.

I've been thinking about why the AI companies are making so much use of fear-based marketing. And I wonder if it isn't just naked Machiavellianism at work.

For a long time, tech companies were forced to compete for power by being the most loved (or at least not the most hated). But now they've found an avenue to cultivate fear instead.