> Could anybody explain how this isn't easily circumvented by using a competitor's model?
Almost all the big hosted AI providers are publicly working on watermarking, at least for media (text is more of a mixed bag). Ultimately, it's probably a regulatory play: the big providers expect that legitimate concerns, their own active fearmongering, and their demonstrations of working watermarking will together result in mandates for commercial AI generation services to include watermarks. This may even be part of a broader regulatory play to restrict the availability and non-research use of open models.
Yes, but isn't the cat already out of the bag? Don't we have sufficiently strong local models that can be fine-tuned or prompted to rewrite text and alter images, and thus destroy any embedded watermark?
Sure, in some cases a model might do astounding things that always shine through, but I guess the jury is still out on these questions.
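For text specifically, a single paraphrase pass through even a small local model tends to wash out statistical (token-bias) watermarks, since those rely on the exact token choices the original model made. A minimal sketch of what that looks like, assuming a Hugging Face transformers install (the model and prompt are purely illustrative, not a specific known-good recipe):

```python
# Rough sketch of a paraphrase attack on a token-level text watermark
# using a small local model. Model name and prompt are illustrative.
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="google/flan-t5-base")

watermarked = (
    "The quick brown fox jumps over the lazy dog, "
    "as generated by a watermarked model."
)

# Rewriting re-samples the token choices, which is exactly what statistical
# (green-list / logit-bias) watermarks depend on, so the embedded signal is
# diluted or destroyed in the rewritten output.
result = paraphraser(
    "Paraphrase the following sentence: " + watermarked,
    max_new_tokens=64,
    do_sample=True,
)
print(result[0]["generated_text"])
```

Image watermarks are harder to strip this cheaply, but the same basic idea (re-generate or perturb the content with a model the watermarker doesn't control) applies.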