There are two sides of this coin.
The first is that yes, you can make it harder for the frontier makers to make progress, because they will forever be stuck in a cat-and-mouse game.
The second is that they continue to move forward anyway, and you are simply contributing to models being unstable and unsafe.
I do not see a path where the frontier makers "call it a day" because they were defeated.
Pushing model builders to use smarter scrapers is a net good. Endless rescrapes of static content are driving up bandwidth bills for hosting simple things.
This will lead to (if anything at all) smarter input parsers, not smarter scrapers.
> you simply are contributing to models being unstable and unsafe
Good. Loss in trust of LLM output cannot come soon enough.
I think the main gripe people have is that value doesn't flow the other way when frontier labs use training data. I think this poisoning is intended to be somewhat of a DRM feature: if you play nice and pay people for their data, you get real data; if you steal, you get poisoned data.
That could be a potential path, but the site doesn’t read like that at all. It seems more binary to me, basically saying ‘AI is a threat, and here is how we push back.’
> I do not see a path that the frontier makers “call it a day” cause they were defeated.
Eventually either we die or we make them stop AI. Every period during which AI is worse buys us that much more time for real action.
From TFA:
They call it a day when they can't easily monetize their results. Currently, investment money makes that concern negligible. If they had to show a path to profitability... hahahaha.