In the past, browsers used an algorithm which only denied setting wide-ranging cookies for top-level domains with no dots (e.g. com or org). However, this did not work for top-level domains where only third-level registrations are allowed (e.g. co.uk). In these cases, websites could set a cookie for .co.uk which would be passed onto every website registered under co.uk.
Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain (the policies differ with each registry), the only method is to create a list. This is the aim of the Public Suffix List.
(https://publicsuffix.org/learn/)
So, once they realized web browsers are all inherently flawed, their solution was to maintain a static list of websites.

God I hate the web. The engineering equivalent of a car made of duct tape.
> Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain
A centralized list like this, covering not just whole registries (e.g. co.uk) but also specific sites (e.g. s3-object-lambda.eu-west-1.amazonaws.com), is both kind of crazy, in that the list will bloat a lot over the years, and a security risk for any platform that needs this functionality but would prefer not to leak any details publicly.
We already have the concept of a .well-known directory that you can use when talking to a specific site. Similarly, we know how you can nest subdomains, like c.b.a.x, and it's more or less certain that you can't create a subdomain b without the involvement of a, so it should be possible to walk the chain.
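Something like this, maybe (a rough sketch only; `_suffix-policy` is a record name I just made up, nothing like it is standardized):

```ts
// Sketch: walk c.b.a.x -> b.a.x -> a.x -> x and ask each parent, via a
// hypothetical "_suffix-policy" TXT record, whether its children should be
// treated as independent sites. Purely illustrative; no such record exists.
import { resolveTxt } from "node:dns/promises";

async function childrenAreIndependent(zone: string): Promise<boolean> {
  try {
    const records = await resolveTxt(`_suffix-policy.${zone}`);
    return records.flat().some((txt) => txt.includes("children-independent"));
  } catch {
    return false; // no record, no claim
  }
}

async function walkChain(host: string): Promise<void> {
  const labels = host.split(".");
  for (let i = 1; i < labels.length; i++) {
    const parent = labels.slice(i).join(".");
    console.log(parent, await childrenAreIndependent(parent));
  }
}

walkChain("some-user.pages.example.co.uk");
```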
For example, maybe ship the domains with the browsers and such, and leave generic sites like AWS or whatever to describe things themselves. Hell, maybe that could also have been a TXT record in DNS as well.

> any platform that needs this functionality but would prefer not to leak any details publicly.
I’m not sure how you’d have this - it’s for the public facing side of user hosted content, surely that must be public?
> We already have the concept of a .well-known directory that you can use, when talking to a specific site.
But the point is to help identify dangerous sites; by definition you can't just let the sites mark themselves as trustworthy and rotate around subdomains. If you have an approach that doesn't have to trust the site, you also don't need any definition at the top level: you could just infer it.
It's actually exactly the same concept that came to mind for me. `SomeUser.geocities.com` is "tainted", along with `*.geocities.com`, so `geocities.com/.well-known/i-am-tainted` is actually reasonable.

Although technically it might be better as `.well-known/taint-regex` (now we have three problems), like `TAINT "*.sites.myhost.com" ; "myhost.com/uploads/*" ; ...`
I think we disagree on the problem.
The thing you want to avoid is this:
a.scamsite.com gets blocked so they just put their phishing pages on b.scamsite.com
The PSL, or your solution, isn't a "don't trust subdomains" notification; it's "if one subdomain is bad, you should still trust the others", and the problem there is that you can't trust them.
You could combine the two, but you still need the suffix list or similar curation.
It's more like "provenance" of content. I broadcast my accountability for "myblog.com/posts/...", but would disavow "myblog.com/posts/.../#comments".

There are some mechanisms, like "nofollow", but nothing systematic, and no "protocol" for disavowing paths, uploads, or fragments.

Back in the Slashdot days, I thought of blogs as "the stationery of the internet", a way to more authoritatively declare that the content was yours... but interop is hard and unprofitable, so walled gardens became the norm.

We just haven't had the benefit of a forcing function that encourages a solution to "that stuff over there is less trusted than my stuff over here".
Maybe we're at the point where hosts of any kind MUST be responsible (or accountable) for any content originating from their domain? It kills indie/anonymous hosting, but puts a fine "KYC" point on distributing "evil" stuff on the internet?
It does smell very much like a feature that is currently implemented as a text file but will eventually need to grow into its own protocol, like, indeed, the hosts file becoming DNS.
One key difference between this list and standard DNS (at least as I understand it; maybe they added an extension to DNS I haven't seen) is the list requires independent attestation. You can't trust `foo.com` to just list its subdomains; that would be a trivial attack vector for a malware distributor to say "Oh hey, yeah, trustme.com is a public suffix; you shouldn't treat its subdomains as the same thing" and then spin up malware1.trustme.com, malware2.trustme.com, etc. Domain owners can't be the sole arbiter of whether their domain counts as a "public suffix" from the point of view of user safety.
It looks like Mozilla does use DNS to verify requests to join the list, at least.
Doing this DNS lookup in the browser in real time would be a performance challenge, though. The PSL affects the scope of cookies (github.io is on the PSL, so a.github.io can't set a cookie that b.github.io can read), so the relevant PSL entry needs to be known before the first HTTP response comes back.

I presume it has to be a curated list, otherwise spammers would use it to evade blocks. Otherwise why not just use DNS?
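To make the cookie-scoping point above concrete: what browsers (and libraries) derive from the list is the "registrable domain", i.e. the matching public suffix plus one more label, and that's the widest scope a page can set a cookie for. A toy sketch, with a couple of hard-coded entries standing in for the real list (real matching also handles wildcards and exceptions):

```ts
// Toy eTLD+1 lookup; a real implementation would load the actual Public
// Suffix List (or use a library) instead of this hard-coded sample.
const SUFFIXES = new Set(["com", "uk", "co.uk", "github.io"]);

function registrableDomain(host: string): string | null {
  const labels = host.split(".");
  for (let i = 0; i < labels.length - 1; i++) {
    const suffix = labels.slice(i + 1).join(".");
    if (SUFFIXES.has(suffix)) return labels.slice(i).join("."); // suffix + one label
  }
  return null;
}

console.log(registrableDomain("a.github.io"));       // "a.github.io"
console.log(registrableDomain("b.github.io"));       // "b.github.io" -> a different "site"
console.log(registrableDomain("www.example.co.uk")); // "example.co.uk"
```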
Whois would be the choice: DNS's less glamorous sibling, purpose-built for delegated publication of accountability records.
Whois isn't curated either.
Neither is nominating a third party for your parking fine.
The point is to get away from centralized gatekeepers, not establish more of them. A hierarchy of disavowal. It’s like cache invalidation for accountability.
If you don’t wanna be held responsible for something, you’d better be prepared to point the finger at someone whois.
> God I hate the web
This is mostly a browser security mistake but also partly a product of ICANN policy & the design of the domain system, so it's not just the web.
Also, the list isn't really that long, compared to, say, certificate transparency logs; now that's a truly mad solution.
Show me a platform not made out of duct tape and I'll show you a platform nobody uses.
regular cars?
Jeep just had an OTA update cause the car to shut down on the highway (it is rumored).
Before we put computers in cars, we had the myriad small things that would break (stuck doors, stuck windows, failed seals, leaking gaskets), a continuous stream of recalls for low-probability safety issues, and the occasional Gremlin or Pinto.
My favorite example is the Hyundai Elantra. They changed the alloy used in one of the parts in the undercarriage. Tested that model to death for a year, as they do, but their proving ground is in the southern United States.
Several winters later, it turns out that road salt attacks the hell out of that alloy and people have wheels flying off their cars in the middle of the road.
The Honda issue where tuning to a certain radio station would brick the infotainment? That good enough?
> That good enough?
Not really. Does the car still drive? That sounds like a software bug; hardly indicative that the entire car is held together with duct tape, but a pretty bad bug nonetheless.
So I can't remember the specifics or find any references, but many years ago I remember reading about a car (Prius, maybe?) that would shut off and lock the doors when pulling away from a stop. (Ex: stopped at a red light; when it turns green, the car would go just far enough to cut off in the middle of an intersection, then trap everyone inside.)
"This is Fine."
That's terrifying.
The browser still drives when Google throws up a safety warning.
It's just harder to drive to one house, and the homeowner is justifiably irritated about this.
More accurate: a mom-n-pop grocery store has its listing on Google Maps changed to PERMANENTLY CLOSED DUE TO TOXIC HEALTH HAZARDS because the mom-n-pop grocery store didn't submit Form 26B/Z to Google. There was never any health hazard, but now everyone thinks there is, and nobody can/will go there. The fact that Form 26B/Z exists at all is problematic, but what makes it terrible is the way it's used to punish businesses for not filling out a form they didn't know existed.
This is an excellent analogy because it is incumbent upon businesses to follow all the laws, including the ones they don't know about. That's one of the reasons "lawyer" is a profession.
Google doesn't have the force of law (it's in this context acting more like a Yelp: "1 star review --- our secret shopper showed up and the manager didn't give the secret 'we are not criminals' hand sign"), but the basic idea is the same: there is a complex web of interactions that can impact your online presence and experts in the field you can choose to hire for consulting or not.
Didn't used to be that way, but the web used to be a community of 100,000 people, not 5.6 billion. Everything gets more complicated when you add more people.
The other commenter's small-business analogy is better, I think. The issue with the browser problem is that it doesn't hinder one person getting to one house; it hinders everyone getting to a place the owner _wants_ people to get to easily.
The browser issue can destroy a small business, one thing I think we can universally agree we don't want. If all of the people who come looking for it find it's being marked as malicious or just can't get there at all, they lose customers.
Worse yet, Google holds the keys because everyone uses Chrome, and you have to play their game by their rules just to keep breathing.
Here's the thing though: if someone else held the keys, the scenario would be the same unless there was no safe browsing protection. And if there were no safe browsing protection, we'd be trading one ill for another; small business owners facing a much steeper curve to compete vs. everyone being at more risk from malware actors.
I honestly don't immediately know how to weigh those risks against each other, but I'll note that this community likely underestimates the second one. Most web users are not nearly as tech- or socially-savvy as the average HN reader and the various methods of getting someone to a malware subdomain are increasingly sophisticated.
The road network is a much better analogy here.
Never heard of this. Link please?
Don't know about Honda, but there is this Mazda one [0] (Would not be surprised if it affected multiple vendors!)
[0] https://www.soundandvision.com/content/remembering-time-when...
Yikes. I missed that. Makes sense it wasn't just the station it was tuned to but the particular data they broadcasted; insane there was no way to power reset the system into a good state.
Admitting I'm old, but my HP-11C still gets pretty-regular use.
And judging by eBay prices, or the SwissMicros product line, I suspect I have plenty of company.
"The engineering equivalent of a car made of duct tape"
Kind of. But do you have a better proposition?
I'd probably say we ought to use DNS.
And while we’re at it, 1) mark domains as https-only, and 2) when root domains map to a subdomain (eg www).
I might amuse you to know hat we also already have a text file as a solution for https-only sites.
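(That text file being the HSTS preload list: a site opts in by serving the Strict-Transport-Security header with the `preload` directive over HTTPS and then submitting the domain at hstspreload.org. A minimal sketch of the server side:)

```ts
// Sketch: sending the HSTS header with the preload directive. The header only
// counts when served over HTTPS, and hstspreload.org additionally requires an
// HTTP -> HTTPS redirect before it will accept the domain.
import { createServer } from "node:http";

createServer((_req, res) => {
  res.setHeader(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains; preload" // one year, all subdomains
  );
  res.end("https only, please\n");
}).listen(8080);
```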
Cookies shouldn't be tied to domains at all, it's a kludge. They should be tied to cryptographic keypairs (client + server). If the web server needs a cookie, it should request one (in its reply to the client's first request for a given url; the client can submit again to "reply" to this "request"). The client can decide whether it wants to hand over cookie data, and can withhold it from servers that use different or invalid keys. The client can also sign the response. This solves many different security concerns, privacy concerns, and also eliminates the dependency on specific domain names.
I just came up with that in 2 minutes, so it might not be perfect, but you can see how, with a little bit of work, there are much better solutions than "I check for not-evil domain in list!"
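For what it's worth, here's a very rough sketch of that idea as I read it (my own illustration, not an existing mechanism): the client files the cookie under the server's public key and only hands it back to a peer that proves possession of the matching private key, e.g. by signing a client-chosen nonce.

```ts
// Rough sketch of "cookies bound to keypairs"; illustration only.
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// --- server side: has a long-lived keypair ---
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const signNonce = (nonce: Buffer) => sign(null, nonce, privateKey); // Ed25519: algorithm is null

// --- client side: cookie jar keyed by server public key, not by domain ---
const serverId = publicKey.export({ type: "spki", format: "pem" }).toString();
const cookieJar = new Map<string, string>([[serverId, "session=abc123"]]);

function releaseCookie(claimedKeyPem: string, nonce: Buffer, sig: Buffer): string | null {
  const proven = verify(null, nonce, claimedKeyPem, sig);
  return proven ? cookieJar.get(claimedKeyPem) ?? null : null;
}

// --- handshake: client challenges, server answers, client releases the cookie ---
const nonce = randomBytes(32);
console.log(releaseCookie(serverId, nonce, signNonce(nonce))); // "session=abc123"
```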
> They should be tied to cryptographic keypairs (client + server).
So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link, for as long as the cookie lives (and users don't visit the website to get the rotated key).
> If the web server needs a cookie, it should request one
This adds a round-trip, which slows down the website on slow connections.
> the client can submit again to "reply" to this "request"
This requires significantly overhauling HTTP and load-balancers. The public-suffix list exists because it's an easy workaround that didn't take a decade to specify and implement.
> So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link
This attack already exists in several forms (leaking a TLS private key, DNS hijack, CA validation attack, etc). You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
> This adds a round-trip, which slows down the website on slow connections.
Requests are already slowed down by the gigantic amount of cookies constantly being pushed by default. The server can send a reply-header once which will tell the client which URLs need cookies perpetually, and the client can store that and choose whether it sends the cookies repeatedly or just when requested. This gives the client much more control over when it leaks users' data.
> This requires significantly overhauling HTTP and load-balancers
No change is needed. Web applications already do all of this all the time. (example: the Location: header is frequently sent by web apps in response to specific requests, to say nothing of REST and its many different request and return methods/statuses/headers).
> The public-suffix list exists because it's an easy workaround
So the engine of modern commerce is just a collection of easy hacks. Fantastic.
> This attack already exists in several forms (leaking a TLS private key, DNS hijack, CA validation attack, etc).
An attacker who gets the TLS private key of a website can't use it easily, because they still need to fool users' browsers into connecting to a server they control as the victim domain, which brings us to:
> You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
It's not. I can think of two ways to attack the DNS. Either 1. control or MITM of the victim's authoritative DNS server or 2. poison users' DNS cache.
Control/MITM of the authoritative server is not an option for everyone (only ISPs/backbone operators), and according to Cloudflare: "DNS poisoning attacks are not easy" (https://www.cloudflare.com/learning/dns/dns-cache-poisoning/)
> Requests are already slowed down by the gigantic amount of cookies constantly being pushed by default
Yes, although adding more data and adding a round-trip have different impacts (high-bandwidth, high-latency connections exist). Lots of cookies and more round-trips is always worse than lots of cookies and fewer round-trips.
> The server can send a reply-header once which will tell the client which URLs need cookies perpetually, and the client can store that and choose whether it sends the cookies repeatedly or just when requested.
Everyone hates configuring caching, so in most cases site operators will leave it at a default of "send everything", and we're back to square one.
> No change is needed.
I was thinking that servers need to remember state between the initial client request and when the client sends an other request with the cookies. But on second thought that's indeed not necessary.
> So the engine of modern commerce is just a collection of easy hacks. Fantastic.
I'm afraid so
There's at least a dozen different attacks on DNS, but the main ones regarding record validation include multiple types of spoofing and MITM (at both the DNS and IP level), cache poisoning, account takeover (of either the nameserver or registrar), DoS attack, etc.
Cache poisoning is the easiest method, and contrary to whatever Cloudflare says, it's trivial. The DNS transaction number is 16 bits. All you have to do is flood the shit out of the resolver with spoofed packets and eventually one of the transaction numbers will hit, and your attack is successful. It's low-bandwidth, takes at most a couple of hours, and nobody notices. This is one of the many reasons you can't just trust whatever DNS says.
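Back-of-envelope for the 16-bit ID alone (this only models blind guessing of the transaction number, nothing else):

```ts
// Probability that at least one of n forged responses matches a uniformly
// random 16-bit transaction ID (ignores any other entropy, e.g. source ports).
const space = 2 ** 16;
const pHit = (n: number) => 1 - (1 - 1 / space) ** n;

for (const n of [1_000, 10_000, 65_536, 200_000]) {
  console.log(`${n} forged replies -> ~${(pHit(n) * 100).toFixed(1)}% chance of a hit`);
}
// 1000 -> ~1.5%, 10000 -> ~14.2%, 65536 -> ~63.2%, 200000 -> ~95.3%
```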
What HTTP messages to cache is not always a choice, as is the case with HSTS. But it could be made one if testing of this proposal (which, again, I came up with in 2 minutes) showed better results one way or another.
But all this is moot anyway cuz nobody gives a crap.
A part of the issue is IMO that browsers have become ridiculously bloated everything-programs. You could take about 90% of that out and into dedicated tools and end up with something vastly saner and safer and not a lot less capable for all practical purposes. Instead, we collectively are OK with frosting this atrocious layer cake that is today's web with multiple flavors of security measures of sometimes questionable utility.
End of random rant.
"You could take about 90% of that out and into dedicated tools "
But then you would lose platform independence, the main selling point of this atrocity.

Having all those APIs in a sandbox that mostly just work on billions of devices is pretty powerful, and a potential successor to HTML would have to beat that to be adopted.

The best thing that could happen, as far as I can see, is that a sane subset crystallizes that people predominantly use, with the rest becoming legacy, maintained only to keep it working.

I have dreamed of a fresh rewrite of the web since university (and the web was way slimmer back then), but I've become a bit more pragmatic, and I think I now better understand the massive problem of solving trusted human communication. It ain't easy in the real world.
But do we need e.g. serial port or raw USB access straight from a random website? Even WebRTC is a bit of a stretch. There is a lot of cruft in modern browsers that does little except increase attack surface.
This all just drives a need to come up with ever more tacked-on protection schemes because browsers have big targets painted on them.
> Even WebRTC is a bit of a stretch
You remove that, and videoconferencing (for business or person to person) has to rely on downloading an app, meaning whoever is behind the website has to release for 10-15 OSes now. Some already do, but not everyone has that budget so now there's a massive moat around it.
> But do we need e.g serial port or raw USB access straight from a random website
Being able to flash an IoT (e.g. ESP32) device from the browser is useful for a lot of people. For the "normies", there was also Stadia allowing you to flash their controller to be a generic Bluetooth/usb one on a website, using that webUSB. Without it Google would have had to release an app for multiple OSes, or more likely, would have just left the devices as paperweights. Also, you can use FIDO/U2F keys directly now, which is pretty good.
Browsers are the modern Excel, people complain that they do too much and you only need 20%. But it's a different 20% for everyone.
I'll flip that around on you: why oh why do we need browsers to carry these security holes in them? The Stadia flasher is a good example: how do I know that a website doesn't contain a device flasher that will turn one of my connected devices into a malicious actor that will attempt to take over whatever machine it's plugged into?
You know because there is an explicit permission box that pops out and asks if you want to give this website access to a device, and asks you to select that device.
Same as your camera/microphone/location.
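For reference, roughly what that looks like from the page's side (a sketch; the browser enforces that the call happens inside a user gesture and only hands over the one device the user picks in the chooser; the vendor ID and the `#connect` button are placeholders):

```ts
// Must run inside a click handler; the browser shows a device chooser and the
// page only ever sees the device the user explicitly selects.
async function pickDevice(): Promise<void> {
  const usb = (navigator as any).usb; // WebUSB isn't in the default TS DOM typings
  if (!usb) {
    console.log("WebUSB not supported here");
    return;
  }
  try {
    const device = await usb.requestDevice({
      filters: [{ vendorId: 0x1234 }], // placeholder vendor ID
    });
    console.log("user granted access to", device.productName);
  } catch {
    console.log("user dismissed the chooser, or no matching device");
  }
}

document.querySelector("#connect")?.addEventListener("click", pickDevice);
```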
But that still gives completely unvetted direct access to the device to a website! People have been pointing to Itch.io games that supposedly require direct USB access. How hard is it to hide a script in there that reprograms a controller into something malicious?
If you download an executable from a website and run it... pretty much the same thing?

If you give USB access, it is not really a website anymore, rather an app delivered through the web. I don't see a fundamental difference in trust.

If anything, I can verify the web-based version more easily, and I certainly won't give access to a random website, just like I don't download random exes from websites.

Performance is lower, yes, and well... like I said, it is all a big mess. Just look at the global namespace in JS. I still use it because of that power feature called platform independence: what I release, people can (mostly) just use. I (mostly) don't care which OS the user has.
A file that lands on my hard drive is automatically scanned for malware. That same kind of protection isn't in place against malicious scripts downloaded by my browser via an opaque HTTPS connection and run in-process.
And we all know that non-technical users never just click Yes to make the annoying popup go away.
Itch.io games and controller support.
You have sites now that let you debug microcontrollers in your browser, super cool.
Same thing but with firmware updates in the browser. Cross platform, replaced a mess of ugly broken vendor tools.
While that's pretty convenient, I'm worried about what happens when the vendor shuts down the website. "Ugly broken vendor tools" can be run forever in a VM of an old system, but a website would be gone forever unless it's purely client-side and someone archived it.
Just because you can do something doesn't mean you should.
Your micro-controllers should use open standards for their debugging interface and not force people to use the vendor website.
WebRTC I have used for many years and would miss a lot. P2P is awesome.

WebUSB I don't use and wouldn't miss right now, but... the main potential use case is security, and it sounds somewhat reasonable:
"Use in multi-factor authentication
WebUSB in combination with special purpose devices and public identification registries can be used as key piece in an infrastructure scale solution to digital identity on the internet."
https://en.wikipedia.org/wiki/WebUSB
> But do we need e.g serial port or raw USB access straight from a random website?
But do we need audio, images, Canvas, WebGL, etc.? The web could just be plain text and we'd still get most of the "useful" content; add images and you get the vast majority of it.
But the idea that the web is a rich environment that has all of these bells and whistles is a good thing imo. Yes there’s attack surface to consider, and it’s not negligible. However, the ability to connect so many different things opens up simple access to things that would otherwise require discrete apps and tooling.
One example that kind of blew my mind is that I wanted a controller overlay for my Twitch stream. After a short bit of looking, there isn’t even a plugin needed in OBS (streaming software). Instead, you add a Web View layer and point it to GamePad Viewer[1] and you’re done.
Serial and USB are possibly a boon for very specific users with very specific accessibility needs. Also, IIRC some of the early iPhone jailbreaks worked via websites on a desktop with your iPhone plugged into USB. Sure, these are niche and could probably be served just as well or better with native apps, but the web also makes the barrier to entry so much lower.
[1]: https://gamepadviewer.com/
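(For the curious, an overlay page like that is just polling the standard Gamepad API; a minimal sketch of the loop it runs:)

```ts
// Poll connected controllers every animation frame and dump button/stick state.
function poll(): void {
  for (const pad of navigator.getGamepads()) {
    if (!pad) continue;
    const pressed = pad.buttons
      .map((button, i) => (button.pressed ? i : -1))
      .filter((i) => i >= 0);
    console.log(pad.id, "buttons:", pressed, "axes:", pad.axes.map((a) => a.toFixed(2)));
  }
  requestAnimationFrame(poll);
}

window.addEventListener("gamepadconnected", () => requestAnimationFrame(poll));
```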
> But do we need e.g serial port or raw USB access straight from a random website?
Yes. Regards, CIA, Mossad, FSB etc.
How else am I going to make a game in the browser that can be controlled with a controller?
Every decent host OS already has a dedicated driver stack to provide game controller input to applications in a useful manner. Why the heck would you ship a reimplementation of that in JS in a website?
So that you can take input from controllers that haven't been invented yet and won't fit the HID model.
If it hasn't been invented yet, you don't need driver software for it, do you? ;)
Anyway, in your scenario the controller would be essentially a one off and you'd be better off writing a native app to interface with it for the one computer this experiment will run on.
If it hasn't been invented yet we don't know the implications of giving a website access to it either.
And that's before realizing it's already a bad idea with existing devices because they were never designed for giving untrusted actors direct access.
That's why we have a privacy and security sandbox in browsers.
You don't, that's the point: not everything needs to be crammed into a browser.
Unlikely. The convenience incentives are far too high to leave features on the table.
Not unlike the programming language or the app (growing until it half-implements LISP or half-implements an email client), the browser will grow until it half-implements an operating system.
For everyone else, there's already w3m.
> Having all those APIs in a sandbox that mostly just work on billions of devices is pretty powerful, and a potential successor to HTML would have to beat that to be adopted.

I think the giant downside is that they've written a rootkit that runs on everything, and to try to make up for that they want to make it so only sites they allow can run.
It's not really very powerful at all if nobody can use it, at that point you are better off just not bothering with it at all.
The Internet may remain, but the Web may really be dead.
"It's not really very powerful at all if nobody can use it"
But people do use it, like the both of us right now?
People also use maps, do online banking, play games, start complex interactive learning environments, collaborate in real time on documents etc.
All of that works right now.
> to try to make up for that they want to make it so only sites they allow can run
What do you mean, you can run whatever you want on localhost, and it's quite easy to host whatever you want for whoever you want too. Maybe the biggest modern added barrier to entry is that having TLS is strongly encouraged/even needed for some things, but this is an easily solved problem.
The blog post and several anecdotes in the comments prove otherwise
Not sure if it counts, but I've been enjoying LibreWolf. I believe it's just a stripped-down Firefox.
>A part of the issue is IMO that browsers have become ridiculously bloated everything-programs.
I don't see how that solves the issue that PSL tries to fix. I was a script kiddy hosting neopets phishing pages on free cpanel servers from <random>.ripway.com back in 2007. Browsers were way less capable then.
The PSL and the way cookies work are just part of the mess. A new approach could solve that in a different way, taking into account all the experience we've had with script kiddies and professional scammers and phishers since then. But I also don't really have an idea where and how to start.
And of course, if the new solution completely invalidates old sites, it just won't get picked up. People prefer slightly broken but accessible to better designed but inaccessible.
> People prefer slightly broken but accessible to better designed but inaccessible.
We live in a world where whatever FAANG adopts is de facto a standard. Accessible these days means google/gmail/facebook/instagram/tiktok works. Everything else is usually forced to follow along.

People will adopt whatever gives them access to their daily dose of doomscrolling and then complain about rather crucial parts of their lives, like online banking, not working.
> And of course, if the new solution completely invalidates old sites, it just won't get picked up.
Old sites don't matter, only high-traffic sites riddled with dark patterns matter. That's the reality, even if it is harsh.
> People prefer slightly broken but accessible to better designed but inaccessible.
It's not even broken as the edge cases are addressed by ad-hoc solutions.
OP is complaining about global infrastructure not having a pristine design. At best it's a complaint over a desirable trait. It's hardly a reason to pull the Jr developer card and mindlessly advocate for throwing everything out and starting over.
2007 you say and less capable you say?!
Try 90s! We had to fight off ActiveX Plugins left and right in the good olde Internet Explorer! Yarr! ;-)
Are you saying we should make a <Unix Equivalent Of A Browser>? A large set of really simple tools that each do one thing really, really, really pedantically well?
This might be what's needed to break out of the current local optimum.
Maybe it's time to revive something like the uzbl[1] project, or start something similar.
[1] https://www.uzbl.org/
I haven't thought of it that way, but that might be a solution.
There was an attempt in that direction.
https://www.uzbl.org/
You are right from a technical point of view, I think, but in reality - how would one begin to make that change?
I'm under the impression that CORS largely solves it?
Which is still much too new to be able to shut down the PSL, of course. But maybe in 2050.
Since this is being downvoted: no, I'm quite serious.
CORS lets sites define their own security boundaries between subdomains, with mutual validation. If you're hosting user content in a subdomain, just don't allow-origin it: that is a clear statement that it's not "the same site". The PSL plays absolutely no part in that logic; it seems clear to me that CORS is at least in part intended to replace the PSL.
Do other sites (like google's safety checks) use CORS for this purpose? Dunno. Seems like they could though? Or am I missing something?
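To make the "just don't allow-origin it" part concrete, a sketch (hostnames are placeholders): an API that echoes the Access-Control-Allow-Origin header for the first-party app but stays silent for the user-content host, so scripts there can't read its responses.

```ts
// Sketch: allow-origin the trusted app origin only; user-content origins get
// no CORS header, so cross-origin reads from there are blocked by the browser.
import { createServer } from "node:http";

const TRUSTED = new Set(["https://app.example.com"]); // not usercontent.example.com

createServer((req, res) => {
  const origin = req.headers.origin ?? "";
  if (TRUSTED.has(origin)) {
    res.setHeader("Access-Control-Allow-Origin", origin);
    res.setHeader("Vary", "Origin");
  }
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);
```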
I think we lost the web somewhere between PageRank and JavaScript. Up to there it was just linked documents and it was mostly fine.
Why is it a centrally maintained list of domains, when there is a whole extensible system for attaching metadata to domain names?
I love the web. It's the corporate capitalistic ad fueled and govt censorship web that is the problem.
> God I hate the web. The engineering equivalent of a car made of duct tape.
Most of the complex things I have seen being made (or contributed to) needed duct tape sooner or later. Engineering is the art of trade-offs, of adapting to changing requirements (which can appear due to uncontrollable events external to the project), technology, and costs.
Related, this is how the first long distance automobile trip was done: https://en.wikipedia.org/wiki/Bertha_Benz#First_cross-countr... . Seems to me it had quite some duct tape.
Why would you compare the Web to that? The first fax message would be a more appropriate comparison.

The Web is not a new thing, and hardly a technical experiment by a few people any more.

If you add the time elapsed since the concept of the Web was announced to that trip's date, you land on a very decent, established industry already, with many sport and mass-production designs:
https://en.wikipedia.org/wiki/Category:Cars_introduced_in_19...
For me the web is something along the lines of the definition at https://en.wikipedia.org/wiki/World_Wide_Web; to sum up, a "...universal linked information system...". I think the fax misses many aspects of the core definition to be a good comparison.

Not sure what your point is about a "decent established industry" if we relate it to "duct tape". I see two possibilities:

a) you imply that the web does not have a decent established industry (but I would guess not).

b) you claim that there was no "duct tape" in the 1924 car industry. I am no expert, but I would refer you to the article describing the procedure for starting a car at https://www.quora.com/How-do-people-start-their-cars-in-the-..., to quote:
> Typical cold-start routine (common 1930s workflow)
> 1. Set hand choke (pull knob).
> 2. Set throttle lever to slight fast‑idle.
> 3. Retard spark if manual advance present.
> 4. Engage starter (electric) or use hand crank.
> 5. Once running, push choke in gradually, advance spark, reduce throttle.
Not sure about your opinion, but compared to what a car's objective is (move from point A to point B), that sounds rather involved to me. Not sure if it qualifies as "duct tape", but it is definitely not a "nicely implemented system that just works".

To sum up my point: I think on average progress is slower and harder than people think. And that is mostly because people do not have exposure to the work being done to improve things until something becomes more "widely available".
That's the nature of decentralised control. It's not just DNS; phone numbers work in the same way.
All web encryption is backed by a static list of root certs that each browser maintains.
Idk any other way to solve it for the general public (ideally each user would probably pick what root certs they trust), but it does seem crazy.
We already have a solution for it: DNS-based Authentication of Named Entities (DANE).

This solution is even more obvious today, when most certificates are just DNS lookups with extra steps.
What we need is a web made in a similar way to the wicker-bodied cars of yesteryear
I'm not sure I'm following: what inherent flaw are you suggesting browsers had that the Public Suffix List originators knew about?
Wait until you learn about the HSTS preload list.