It might work against people who just use their Mac Mini with OpenClaw to summarize the news every morning, but it certainly won't work against Google.
More centralized web ftw.
It also probably won't work if the person actually wants your content and checks whether the thing they scraped makes sense or is just noise. Like, none of this is new. Site owners have been feeding junk/fake data to web scrapers since web scraping was invented.
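Roughly, that old trick looks like this (a toy sketch, not anyone's real setup; the user-agent heuristics and the decoy generator are both made up for illustration):

    # Hypothetical sketch: serve decoy text to suspected scrapers, real content to everyone else.
    import random
    from flask import Flask, request

    app = Flask(__name__)

    SUSPECT_UA_FRAGMENTS = ("python-requests", "curl", "scrapy")  # assumed heuristics
    WORDS = ["lorem", "ipsum", "dolor", "sit", "amet"]

    def decoy_page() -> str:
        # Plausible-looking junk: enough to pass a lazy sanity check, useless as data.
        return " ".join(random.choices(WORDS, k=200))

    @app.route("/article/<slug>")
    def article(slug):
        ua = (request.headers.get("User-Agent") or "").lower()
        if any(frag in ua for frag in SUSPECT_UA_FRAGMENTS):
            return decoy_page()  # junk for scrapers
        return f"Real content for {slug}"  # real content for humans

Which is exactly why it fails against a scraper that bothers to sanity-check what it got back.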
In my experience, Google (among others) plays nice. Just put a blanket "Disallow: /" in your robots.txt, and they won't bother you again.
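For reference, the standard opt-out is a wildcard user-agent with a root disallow, which compliant crawlers like Googlebot honor:

    # robots.txt at the site root: block all compliant crawlers.
    User-agent: *
    Disallow: /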
My current problem is OpenAI, which scrapes massively, ignoring every limit (426, nginx's 444, and whatever else you throw at them), and botnets from East Asia using one IP per scrape, but thousands of IPs in total.
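One blunt countermeasure at the edge, for what it's worth (a sketch, assuming nginx; the user-agent list is a guess at the usual AI-crawler strings, and none of this helps against a UA-spoofing botnet):

    # In the http{} context: map known AI-crawler user agents to a flag.
    map $http_user_agent $is_ai_bot {
        default          0;
        ~*GPTBot         1;
        ~*OAI-SearchBot  1;
        ~*ClaudeBot      1;
    }

    server {
        listen 80;
        # ... existing config ...
        if ($is_ai_bot) {
            return 444;  # nginx-specific: close the connection with no response
        }
    }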
> It might work against people who just use their Mac Mini with OpenClaw to summarize the news every morning,
Good enough for me.
> More centralized web ftw.
This ain't got anything to do with a "centralized web"; this kind of epistemological vandalism can't be shunned enough.