What's the lead time between new generators and a new detection model? What about novel generators that are never made public?

I think the most likely move for a criminal organization doing this is to take a publicly available architecture, train a model from scratch on the material they want to reproduce, and then use it without telling anyone. Would your detector prevent this attack?

There are three observations that are helpful here:

A: High-quality, battle-tested architectures are sold via an API, so samples are easy to retrieve at scale.
B: Lower-quality, novel architectures are often published on GitHub and can be trained at scale on budget compute.
C: Detection models often perform well at classifying content generated by architectures similar to the ones whose output they were trained on, even if the architecture is not identical (see the sketch below).
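To make C concrete, here is a minimal sketch of a cross-generator evaluation: train a real-vs-fake classifier on one generator's output, then measure accuracy on a related generator's output. The directory layout (data/train_genA, data/test_genB), the ResNet-18 backbone, and the hyperparameters are all placeholder assumptions of mine, not anyone's actual pipeline:

```python
# Illustrative only: train real-vs-fake on generator A's output,
# then test on generator B's output to probe cross-architecture transfer.
# Paths, backbone, and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical layout: both folders contain "real/" and "fake/" subdirs,
# so ImageFolder assigns the same label mapping to train and test.
train_ds = datasets.ImageFolder("data/train_genA", transform=tf)  # fakes from generator A
test_ds = datasets.ImageFolder("data/test_genB", transform=tf)    # fakes from a related generator B
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=32)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # binary head: real vs. fake
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for _ in range(3):  # a few epochs is enough for a toy run
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# The interesting number: accuracy on a generator the model never saw.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in test_dl:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print(f"accuracy on unseen generator B: {correct / total:.2%}")
```

If the accuracy on generator B's output stays well above chance, the detector has transferred beyond the exact model it was trained against, which is the effect C describes.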

As for the actual lead time associated with our strategy, that's probably not something I can discuss publicly. I can say I'm working on shortening it.

I don't want to be rude, but isn't this a question you get asked by potential customers? Is that your answer for them? It sounds a lot like "I guess we'll find out."

I think you misunderstood my answer. There is established research showing that deepfake detection models transfer within architecture families: a detector trained on one generator's output often generalizes to similar generators. That's observation C above.