> I don't see why anyone would bother trying to distinguish humans from AI.

Because a hundred thousand people reading a blog post is more beneficial to the world than an AI scraper bot fetching my (unchanged) blog post a hundred thousand times just in case it's changed in the last hour.

If AI bots were well-behaved, maintained a consistent user agent, used consistent IP subnets, and respected robots.txt, I wouldn't have a problem with them. You could manage your content filtering however you want (or not at all) and that would be that. Unfortunately, at the moment AI bots do everything they can to bypass whatever restrictions, blocks, or rate limits you put on them; they behave as though they're completely entitled to overload your servers in their quest to train their models, so they can make billions of dollars on the new AI craze while giving nothing back to the people whose content they're misappropriating.
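For what "well-behaved" would look like in practice, here's a minimal sketch of a polite fetch in Python, assuming the `requests` library plus the standard-library `urllib.robotparser`; the user agent string and URLs are made up for the example. It checks robots.txt under a consistent, declared user agent, and on re-checks it sends a conditional GET (`If-None-Match` / `If-Modified-Since`) so the server can answer `304 Not Modified` instead of shipping an unchanged page again.

```python
# Sketch of a "well-behaved" fetch: honour robots.txt and use a conditional
# GET so an unchanged page costs the server almost nothing to re-serve.
# The user agent and URLs below are illustrative, not real.
import urllib.robotparser

import requests

USER_AGENT = "ExampleAIBot/1.0 (+https://example.com/bot)"  # hypothetical
URL = "https://example.org/blog/post"                       # hypothetical

# 1. Check robots.txt with a consistent, declared user agent.
robots = urllib.robotparser.RobotFileParser("https://example.org/robots.txt")
robots.read()
if not robots.can_fetch(USER_AGENT, URL):
    raise SystemExit("robots.txt disallows this URL; skip it")

# 2. First fetch: remember the validators the server hands back.
resp = requests.get(URL, headers={"User-Agent": USER_AGENT}, timeout=10)
etag = resp.headers.get("ETag")
last_modified = resp.headers.get("Last-Modified")

# 3. Later re-check: a conditional GET returns 304 if nothing changed,
#    so the body is never re-downloaded.
headers = {"User-Agent": USER_AGENT}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified

recheck = requests.get(URL, headers=headers, timeout=10)
if recheck.status_code == 304:
    print("Unchanged since last fetch; nothing to do")
else:
    print("Content changed; process the new body")
```

That's just standard HTTP cache validation, the same mechanism browsers and feed readers have used for years; a scraper that did this, kept one user agent, and backed off when rate limited would cost a site very little to serve.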

I've not seen an AI scraper reading a blog post 100,000 times in an hour to see if it's changed. As far as I can tell, that's an NI hallucination. Typical fetch rates are more like 3 requests per second (roughly 10k per hour), each to a different URL.

>Because a hundred thousand people reading a blog post is more beneficial to the world than an AI scraper bot fetching my (unchanged) blog post a hundred thousand times just in case it's changed in the last hour.

You have zero evidence of this actually happening (because it's not happening).