All this stuff is snake oil, either already, or eventually.
There are new models showing up regularly. Civitai recognizes 33 image models at this point, and audio will see plenty of new models too. A successful attack on one model isn't guaranteed to transfer to another, let alone to models that haven't been invented yet. There's also a multitude of possible pre-processing methods, and combinations of them, that can be applied to any piece of media.
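To illustrate what I mean by pre-processing, here's a minimal sketch (Python with Pillow) of the kind of resize/blur/re-encode pass that training pipelines often run images through anyway; the file names and parameter values are placeholders I made up, not any particular tool's defaults.

    # Hypothetical pre-processing sketch; paths and parameters are illustrative only.
    from PIL import Image, ImageFilter

    img = Image.open("glazed_input.png").convert("RGB")

    # Downscale and upscale again: resampling tends to smear pixel-level perturbations.
    w, h = img.size
    img = img.resize((w // 2, h // 2), Image.LANCZOS).resize((w, h), Image.LANCZOS)

    # A mild blur followed by lossy JPEG re-encoding discards more high-frequency detail.
    img = img.filter(ImageFilter.GaussianBlur(radius=0.5))
    img.save("preprocessed.jpg", "JPEG", quality=85)

Whether any given perturbation survives something like that depends entirely on the specific attack and the specific pipeline, which is the point.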
There's also the difficulty of attacking a system that's not well documented. Not every model out there is open source and available for deep analysis.
And it's hard to attack something that doesn't yet exist, which means countermeasures will only show up after a model has already been successfully created. I'm sure this is of some academic interest, but the practical benefit seems to be approximately none.
Since information is trivially stored, anyone running into trouble could just download the file today and sit on it for a year or two, doing nothing at all, and wait for a new model to show up.
To the extent that the people making the models feel unburdened by the data being explicitly watermarked "don't use me", you are correct.
Seems like an awful risk to deliberately strip such markings. It's a kind of DRM, and breaking DRM is illegal in many countries.
But it's not intended as a watermark; it's an attempt at disruption. And with some models it simply doesn't work.
For instance, I've seen somebody experiment with Glaze (the image AI version of this). Glaze at high levels produces visible artifacts (see middle image: https://pbs.twimg.com/media/FrbJ9ZTacAAWQQn.jpg:large ).
It seems some models ignore it and produce mostly clean images on the output (looking like the last image), while others just interpret it as a texture: the character simply ends up wearing a funny patterned shirt. The intended result, meanwhile, is to fool the model into generating something other than the intended character.
> It seems some models ignore it and produce mostly clean images on the output (looking like the last image), while others just interpret it as a texture
This sounds like you’re talking about img2img generation based on a glazed image instead of training, which isn’t the intended purpose.
No, I'm not talking about img2img. There are people training LoRAs on these. There have been multiple experiments, and so far I've seen no evidence of it working as intended.
Here's an example I found: https://www.reddit.com/r/aiwars/comments/1h1x4e2/a_very_deta...
You can see there an example of the training picking up the Glaze artifacts and just using them as a funky texture. That's not really what Glaze is intended to do. Glaze is supposed to interfere with training, not be interpreted as "this artist draws skin with a weird pattern on it".