Cookies shouldn't be tied to domains at all; that's a kludge. They should be tied to cryptographic keypairs (client + server). If the web server needs a cookie, it should request one (in its reply to the client's first request for a given URL; the client can submit again to "reply" to this "request"). The client can decide whether it wants to hand over cookie data, and can withhold it from servers that use different or invalid keys. The client can also sign its response. This addresses many security concerns, privacy concerns, and also eliminates the dependency on specific domain names.
I just came up with that in 2 minutes, so it might not be perfect, but you can see how, with a little bit of work, there are much better solutions than "I check for not-evil domain in list!"
> They should be tied to cryptographic keypairs (client + server).
So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link, for as long as the cookie lives (and users don't visit the website to get the rotated key).
> If the web server needs a cookie, it should request one
This adds a round-trip, which slows down the website on slow connections.
> the client can submit again to "reply" to this "request"
This requires significantly overhauling HTTP and load-balancers. The public-suffix list exists because it's an easy workaround that didn't take a decade to specify and implement.
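For context, the "check for a domain in a list" being derided upthread boils down to roughly this. A toy sketch: the real public-suffix list (publicsuffix.org) is far larger and the real matching algorithm handles more cases, but the shape of the check is the same.

```python
# Tiny sample of public suffixes; the real list has thousands of entries.
PUBLIC_SUFFIXES = {"com", "co.uk", "github.io"}

def may_set_cookie(page_host: str, cookie_domain: str) -> bool:
    # A page may never set a cookie for a shared suffix -- otherwise
    # evil.github.io could set cookies readable by all of github.io.
    if cookie_domain in PUBLIC_SUFFIXES:
        return False
    # Otherwise the cookie domain must be the host itself or a parent of it.
    return page_host == cookie_domain or page_host.endswith("." + cookie_domain)

print(may_set_cookie("evil.github.io", "github.io"))     # False: blocked by the list
print(may_set_cookie("app.example.com", "example.com"))  # True
```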
> So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link
This attack already exists in several forms (leaking a TLS private key, DNS hijack, CA validation attack, etc). You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
> This adds a round-trip, which slows down the website on slow connections.
Requests are already slowed down by the gigantic number of cookies constantly being pushed by default. The server can send a reply header once telling the client which URLs need cookies perpetually; the client can store that and choose whether to send the cookies repeatedly or only when requested. This gives the client much more control over when it leaks users' data.
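Sketching that client-side logic makes the idea concrete. The header name `Cookie-Scope` is invented here for illustration; nothing like it exists in any HTTP standard. The server sends it once, the client remembers the declared URL prefixes, and only attaches cookies to matching requests.

```python
from urllib.parse import urlparse

def parse_cookie_scope(header_value: str) -> list[str]:
    # e.g. a hypothetical reply header: Cookie-Scope: /account, /cart
    return [p.strip() for p in header_value.split(",")]

def should_send_cookies(url: str, scopes: list[str]) -> bool:
    # The client, not the server, decides: cookies go only to URLs the
    # server previously declared as needing them.
    path = urlparse(url).path
    return any(path.startswith(scope) for scope in scopes)

scopes = parse_cookie_scope("/account, /cart")
print(should_send_cookies("https://shop.example/cart/checkout", scopes))  # True
print(should_send_cookies("https://shop.example/blog/post-1", scopes))    # False
```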
> This requires significantly overhauling HTTP and load-balancers
No change is needed. Web applications already do all of this all the time. (Example: the Location: header is frequently sent by web apps in response to specific requests, to say nothing of REST and its many request and response methods, statuses, and headers.)
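The request / cookie-challenge / retry flow fits existing HTTP shapes, much like a 401 + WWW-Authenticate challenge does. A minimal simulation, with the `cookie-request` header invented purely for illustration:

```python
def server(request: dict) -> dict:
    if "cookie" not in request["headers"]:
        # First contact: challenge the client for its cookie, the same way
        # a 401 challenges for credentials. No server-side state is kept
        # between the two requests.
        return {"status": 401, "headers": {"cookie-request": "session"}}
    return {"status": 200, "headers": {},
            "body": "hello, " + request["headers"]["cookie"]}

# Round 1: client sends no cookie, server issues a challenge.
reply = server({"headers": {}})
print(reply["status"])  # 401

# Round 2: the client decides to answer the challenge and retries.
reply = server({"headers": {"cookie": "session=abc123"}})
print(reply["body"])  # hello, session=abc123
```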
> The public-suffix list exists because it's an easy workaround
So the engine of modern commerce is just a collection of easy hacks. Fantastic.
> This attack already exists in several forms (leaking a TLS private key, DNS hijack, CA validation attack, etc).
An attacker who gets the TLS private key of a website can't use it easily, because they still need to fool users' browsers into connecting to a server they control as the victim domain, which brings us to:
> You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
It's not. I can think of two ways to attack DNS: 1. control or MITM of the victim's authoritative DNS server, or 2. poisoning users' DNS caches.
Control/MITM of the authoritative server is only an option for ISPs and backbone operators, and according to Cloudflare: "DNS poisoning attacks are not easy" (https://www.cloudflare.com/learning/dns/dns-cache-poisoning/)
> Requests are already slowed down by the gigantic amount of cookies constantly being pushed by default
Yes, although adding more data and adding a round-trip have different impacts (high-bandwidth, high-latency connections exist). Lots of cookies and more round-trips is always worse than lots of cookies and fewer round-trips.
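Some back-of-the-envelope numbers make the asymmetry obvious. The link parameters below are illustrative assumptions (roughly a satellite-class connection): on a fast but laggy link, an extra round-trip costs orders of magnitude more than a few kilobytes of cookie headers.

```python
BANDWIDTH = 100e6 / 8      # 100 Mbit/s link, in bytes per second (assumed)
RTT = 0.200                # 200 ms round-trip time (assumed)

extra_cookie_bytes = 4096  # a generous 4 KiB of cookie headers
cookie_cost = extra_cookie_bytes / BANDWIDTH  # transfer time for the cookies
round_trip_cost = RTT                         # time lost to one extra round-trip

print(f"4 KiB of cookies: {cookie_cost * 1000:.2f} ms")   # 0.33 ms
print(f"one extra round-trip: {round_trip_cost * 1000:.0f} ms")  # 200 ms
```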
> The server can send a reply-header once which will tell the client which URLs need cookies perpetually, and the client can store that and choose whether it sends the cookies repeatedly or just when requested.
Everyone hates configuring caching, so in most cases site operators will leave it at the default of "send everything", and we're back to square one.
> No change is needed.
I was thinking that servers would need to remember state between the initial client request and when the client sends another request with the cookies. But on second thought, that's indeed not necessary.
> So the engine of modern commerce is just a collection of easy hacks. Fantastic.
I'm afraid so
There are at least a dozen different attacks on DNS, but the main ones regarding record validation include multiple types of spoofing and MITM (at both the DNS and IP level), cache poisoning, account takeover (of either the nameserver or the registrar), DoS attacks, etc.
Cache poisoning is the easiest method, and contrary to whatever Cloudflare says, it's trivial. The DNS transaction ID is only 16 bits. All you have to do is flood the shit out of the resolver with spoofed replies, and eventually one of the transaction IDs will hit and your attack succeeds. It's low-bandwidth, takes at most a couple of hours, and nobody notices. This is one of the many reasons you can't just trust whatever DNS says.
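The flooding math can be checked directly: each spoofed reply matches a 16-bit transaction ID with probability 1/65536, so the chance of at least one hit after N replies is 1 - (1 - 1/65536)^N. (One caveat, noted as a comment: modern resolvers also randomize the UDP source port per RFC 5452, which multiplies the space an attacker has to search.)

```python
import math

# Classic pre-source-port-randomization setting: 16-bit transaction ID.
# RFC 5452's source-port randomization makes the effective search space
# far larger on modern resolvers; this models only the ID itself.
ID_SPACE = 2 ** 16

def hit_probability(spoofed_replies: int) -> float:
    """Chance that at least one spoofed reply matches the transaction ID."""
    return 1 - (1 - 1 / ID_SPACE) ** spoofed_replies

print(f"{hit_probability(65536):.2f}")  # ~0.63 after one ID-space worth of packets

# Spoofed replies needed for a 50% chance of at least one hit:
n_half = math.log(0.5) / math.log(1 - 1 / ID_SPACE)
print(f"{n_half:.0f}")  # ~45426 packets -- trivially low-bandwidth
```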
Whether an HTTP message gets cached is not always the client's choice, as HSTS shows. But it could be made one if testing of this proposal (which, again, I came up with in 2 minutes) showed better results one way or the other.
But all this is moot anyway cuz nobody gives a crap.