Isn't this a trope at this point? That AI companies are indiscriminately training on random websites?
Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?
Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?
Isn't it also, potentially, the case that the ai-scrapers are mostly looking for content based on user queries, rather than as training data?
If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web-queries rather than (presumed) well-poisoning?
Is this a solution in search of a problem?
You do raise an interesting point. The poison fountains would probably be more effective if their outputs more closely resembled whatever the most popular problem spaces are at any given point.