At its foundation, the bots issue is in fact three main issues:
Bots vs. humans: humans trying to buy tickets that have already been bought out by bots.
Data scraping: you index my data (real estate listings) not to route traffic to my site as people search for my product, the way a search engine would, but to become my competitor.
Spam (and scams): digital pollution, or even worse, tricking people into entering credit card numbers, gift card codes, passwords, etc.
(Obviously there are more, and most will fall into those categories, but those are the main ones.)
Now, in the era of human-assisted AI, the first issue is no longer an issue, since it is obvious that each of us, the internet users, will soon have an agent built into our browser. So we will all have speedy, automated select, click, and checkout at our disposal.
Prior to the LLM era, there were search engines and academic research on the right side of the internet-bots map, and scrapers, and worse, on the wrong side. But now we have legitimate human users extending their interaction with an LLM agent, and on top of that, new AI companies, large and small, hungry for data to train their models.
Cloudflare is simply trying to make sense of this, whilst keeping their bot protection relevant.
I do not appreciate the post content whatsoever, since it lacks consistency and maturity (a true understanding of how the internet works, rather than a naive one).
When you talk about "the internet", what exactly are you referring to? A blog? A bank account management app? A retail website? Social media?
Those are all part of the internet, and each is a completely different type of operation.
EDIT:
I wrote a few words about this back in January [1] and in fact suggested something similar:
Leading CDNs, CAPTCHA providers, and AI vendors—think Cloudflare, Google reCAPTCHA, OpenAI, or Anthropic—could collaborate to develop something akin to a “tokenized machine ID.”
[1] https://blog.tarab.ai/p/bot-management-reimagined-in-the
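To make the idea a bit more concrete, here is a minimal sketch of what such a handshake could look like, assuming the AI vendor mints short-lived signed tokens for its agents and shares a verification key with the CDN or CAPTCHA provider. Every name in it (issue_machine_token, verify_machine_token, the shared key, the agent id) is a hypothetical illustration, not any real API:

    # Hypothetical "tokenized machine ID" sketch: the AI vendor signs an agent
    # identity plus an expiry; the CDN verifies the signature before letting
    # the agent through. Names and key handling are illustrative only.
    import base64
    import hashlib
    import hmac
    import json
    import time

    SHARED_KEY = b"key-provisioned-between-ai-vendor-and-cdn"  # placeholder secret

    def issue_machine_token(agent_id: str, ttl_seconds: int = 300) -> str:
        """AI vendor side: sign the agent identity and an expiry timestamp."""
        payload = json.dumps({"agent": agent_id, "exp": int(time.time()) + ttl_seconds})
        sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

    def verify_machine_token(token: str) -> bool:
        """CDN side: check the signature and expiry before admitting the agent."""
        try:
            payload, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 1)
        except Exception:
            return False
        expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        return json.loads(payload)["exp"] > int(time.time())

    if __name__ == "__main__":
        token = issue_machine_token("example-agent-1234")
        print("token accepted:", verify_machine_token(token))  # True while unexpired

In practice this would more likely be asymmetric (the vendor signs, the CDN only holds a public verification key), but the point is the same: the agent identifies itself cryptographically instead of pretending to be a human browser.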