Your response to unauthenticated requests could be <h1>Hello world</h1> served straight from memory, and your server/link will still fall over under a volumetric attack. And you still get the pleasure of paying for the bandwidth.
So no, this advice has been outdated for decades.
Also you're doing some sort of victim blaming where everyone on earth has to engineer their service to withstand DoS instead of outsourcing that to someone else. Abusers outsource their attacks to everyone else's machine (decentralization ftw!), but victims can't outsource their defense because centralization goes against your ideals.
At least lament the naive infrastructure of the internet or something, sheesh.
We started with "AI crawlers are too aggressive" and you've escalated to volumetric DDoS. These aren't the same problem. OpenAI hitting your API too hard is solved by caching, not by Cloudflare deciding who gets an "agent passport."
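To make concrete what I mean by "caching" (rough sketch only; the framework, the /feed route, and the 300-second TTL are my own illustrative choices, not anything from the article): memoize the expensive work and mark the response cacheable, so repeat crawler hits cost approximately nothing at the origin.

```python
# Rough sketch of the "just cache it" argument: memoize the expensive work
# and mark the response cacheable so repeat crawler hits are nearly free.
# Flask, the /feed route, and the 300s TTL are illustrative assumptions.
import time

from flask import Flask, Response

app = Flask(__name__)

TTL = 300  # seconds of staleness you're willing to tolerate
_cache = {"body": None, "expires": 0.0}

def build_feed() -> str:
    # Stand-in for the expensive part (DB queries, template rendering, etc.).
    return "<h1>Hello world</h1>"

@app.route("/feed")
def feed() -> Response:
    now = time.time()
    if _cache["body"] is None or now > _cache["expires"]:
        _cache["body"] = build_feed()
        _cache["expires"] = now + TTL
    resp = Response(_cache["body"], mimetype="text/html")
    # Lets any CDN or reverse proxy in front absorb the repeat fetches too.
    resp.headers["Cache-Control"] = f"public, max-age={TTL}"
    return resp
```

Put a CDN or reverse proxy in front of that and the repeats never even reach your app.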
"Victim blaming"? Can we please leave these therapy-speak terms back in the 2010s where they belong and out of technical discussions? If expecting basic caching is victim blaming, then so is expecting HTTPS, password hashing, or any technical competence whatsoever.
Your decentralization point actually proves mine: yes, attackers distribute while defenders centralize. That's why we shouldn't make centralization mandatory! Right now you can choose Cloudflare. With attestation, they become the web's border control.
The fine article makes it clear what this is really about: Cloudflare wants to be the gatekeeper for agent traffic. Agent attestation doesn't solve volumetric attacks (those need the DDoS protection they already sell, no new proposal required!). They're creating an allowlist where they decide who's "legitimate."
But sure, let's restructure the entire web's trust model because some sites can't configure a cache. That seems proportional.
OpenAI hitting your static, cached pages too hard and costing you terabytes of extra traffic that you have to pay for (both in bandwidth itself and in data transfer fees) isn't solved by caching.
The post you're replying to points out that, at a certain scale, even caching things in memory won't stop your system from falling over when user agents (e.g. AI scraper bots) behave like bad actors: ignoring robots.txt and fetching every URL twenty times a day while completely ignoring cache headers, Last-Modified, and the like.
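For anyone following along, here's roughly what that handshake looks like (sketch only; the framework, the /page route, and the hard-coded ETag are my own illustrative choices): a polite client revalidates with If-None-Match and gets a body-less 304, while a scraper that skips it forces the full page, and the full bandwidth bill, on every single fetch.

```python
# Sketch of the conditional-GET revalidation that badly behaved scrapers skip.
# Flask, the /page route, and the hard-coded ETag are illustrative assumptions.
from flask import Flask, Response, request

app = Flask(__name__)

PAGE_BODY = "<h1>Hello world</h1>" * 10_000  # pretend this is a big page
PAGE_ETAG = '"v42"'                          # bump whenever the page changes

@app.route("/page")
def page() -> Response:
    # A polite client echoes the ETag back; answer with an empty 304.
    if request.headers.get("If-None-Match") == PAGE_ETAG:
        return Response(status=304, headers={"ETag": PAGE_ETAG})
    # A client that ignores caching gets the full body every single time.
    resp = Response(PAGE_BODY, mimetype="text/html")
    resp.headers["ETag"] = PAGE_ETAG
    resp.headers["Cache-Control"] = "public, max-age=300"
    return resp
```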
Your points were all valid when we were dealing with "legitimate users", "legitimate good-faith bots", and "bad actors", but now the AI companies' need for massive amounts of up-to-the-minute content at all costs means we have to add "legitimate bad-faith bots" to the mix.
> Agent attestation doesn't solve volumetric attacks (those need the DDoS protection they already sell, no new proposal required!). They're creating an allowlist where they decide who's "legitimate."
Agent attestation solves overzealous AI scraping, which looks like a volumetric attack, because if you refuse to serve content to the bots they'll leave you alone (or at least they won't chew up your bandwidth by re-fetching the same content over and over all day).
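In practice "refuse to provide the content" can be as crude as this (sketch only; the User-Agent substrings and the Flask hook are my own illustration, and a UA check is trivially spoofable, which is exactly the gap attestation is supposed to close):

```python
# Crude sketch of refusing to serve known scrapers by User-Agent string.
# The UA substrings listed here are examples; a real deployment would keep
# its own list, and attestation aims to replace this spoofable check with
# something verifiable.
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_UA_SUBSTRINGS = ("GPTBot", "CCBot", "Bytespider")  # example tokens

@app.before_request
def refuse_known_scrapers() -> None:
    ua = request.headers.get("User-Agent", "")
    if any(token in ua for token in BLOCKED_UA_SUBSTRINGS):
        abort(403)  # no body served, no bandwidth burned

@app.route("/page")
def page() -> str:
    return "<h1>Hello world</h1>"
```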
Well, your post escalated to the broad claim that I responded to.
You didn't just disagree with AI crawler attestation: you argued that nobody should distinguish earnest users from everything else and that sites should simply bear the cost of serving both, which necessarily includes bad traffic and incidental DoS.
Once again, services like Cloudflare exist because a cache isn't sufficient to deal with arbitrary traffic, and the scale of modern abuse is so large that only a few megacorps can provide the service that people want.