I'd probably say we ought to use DNS.
And while we're at it, 1) mark domains as https-only, and 2) indicate when root domains map to a subdomain (e.g. www).
It might amuse you to know that we also already have a text file as a solution for https-only sites.
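For context, that existing "text file" is presumably the HSTS preload list, which is fed by the Strict-Transport-Security response header; a DNS-published equivalent of the ideas above might look roughly like the sketch below. The TXT policy record is purely illustrative (no such record is standardized), while the HTTPS record type does exist (RFC 9460) and already covers part of the https-only story:

    Strict-Transport-Security: max-age=63072000; includeSubDomains; preload

    ; hypothetical DNS equivalents (the TXT policy record is invented for illustration;
    ; the HTTPS record type is real, per RFC 9460)
    example.com.  3600 IN HTTPS 1 . alpn="h2,h3"
    example.com.  3600 IN TXT   "web-policy=https-only; apex-alias=www.example.com"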
Cookies shouldn't be tied to domains at all; it's a kludge. They should be tied to cryptographic keypairs (client + server). If the web server needs a cookie, it should request one (in its reply to the client's first request for a given URL; the client can submit again to "reply" to this "request"). The client can decide whether it wants to hand over cookie data, and can withhold it from servers that use different or invalid keys. The client can also sign the response. This addresses many security and privacy concerns, and also eliminates the dependency on specific domain names.
I just came up with that in 2 minutes, so it might not be perfect, but you can see how, with a little bit of work, there are much better solutions than "I check for not-evil domain in list!"
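To make the proposal concrete, a key-bound exchange might look roughly like this. Every header name here (Cookie-Key, Cookie-Request, the sig fields) is hypothetical, invented only to illustrate the flow, not part of any existing spec:

    Client:  GET /account HTTP/1.1
             Cookie-Key: <client public key>

    Server:  HTTP/1.1 401 Cookie Required
             Cookie-Request: session; server-key=<server public key>; sig=<signature over request>

    Client:  GET /account HTTP/1.1          (resubmitted, "replying" to the request)
             Cookie: session=<value>; sig=<client signature binding value to server-key>

The client would only answer a Cookie-Request whose server-key matches the keypair it originally stored the cookie under, which is what removes the dependency on domain names.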
> They should be tied to cryptographic keypairs (client + server).
So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link, for as long as the cookie lives (and users don't visit the website to get the rotated key).
> If the web server needs a cookie, it should request one
This adds a round-trip, which slows down the website on slow connections.
> the client can submit again to "reply" to this "request"
This requires significantly overhauling HTTP and load-balancers. The public-suffix list exists because it's an easy workaround that didn't take a decade to specify and implement.
> So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link
This attack already exists in several forms (leaking a TLS private key, DNS hijack, CA validation attack, etc). You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
> This adds a round-trip, which slows down the website on slow connections.
Requests are already slowed down by the gigantic amount of cookies constantly being pushed by default. The server can send a reply-header once which will tell the client which URLs need cookies perpetually, and the client can store that and choose whether it sends the cookies repeatedly or just when requested. This gives the client much more control over when it leaks users' data.
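A minimal sketch of that "declare once" idea, again with a hypothetical header (nothing like Cookie-Scope exists today):

    HTTP/1.1 200 OK
    Cookie-Scope: /account/*, /cart/*; mode=always
    Cookie-Scope: /api/*; mode=on-request
    Cookie-Scope: /static/*; mode=never

The client caches this once and then attaches cookies only where the policy (and the user) allows.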
> This requires significantly overhauling HTTP and load-balancers
No change is needed. Web applications already do this kind of request/response negotiation all the time (for example, the Location: header is frequently sent by web apps in response to specific requests, to say nothing of REST and its many different methods, status codes, and headers).
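Concretely, the familiar post/redirect/get pattern already has the server steering the client's next request, with no special support from load-balancers:

    POST /login HTTP/1.1
    Host: example.com
    Content-Type: application/x-www-form-urlencoded

    user=alice&pass=...

    HTTP/1.1 303 See Other
    Location: /dashboard
    Set-Cookie: session=abc123; Secure; HttpOnly

A cookie "request/reply" exchange would be the same shape: a response that tells the client what to do on its next request.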
> The public-suffix list exists because it's an easy workaround
So the engine of modern commerce is just a collection of easy hacks. Fantastic.
> This attack already exists in several forms (leaking a TLS private key, DNS hijack, CA validation attack, etc).
An attacker who gets the TLS private key of a website can't use it easily, because they still need to fool users' browsers into connecting to a server they control as the victim domain, which brings us to:
> You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
It's not. I can think of two ways to attack the DNS. Either 1. control or MITM of the victim's authoritative DNS server or 2. poison users' DNS cache.
Control/MITM of the authoritative server is not an option for everyone (only ISPs/backbone operators), and according to Cloudflare: "DNS poisoning attacks are not easy" (https://www.cloudflare.com/learning/dns/dns-cache-poisoning/)
> Requests are already slowed down by the gigantic amount of cookies constantly being pushed by default
Yes, although adding more data and adding a round-trip have different impacts (high-bandwidth, high-latency connections exist). Lots of cookies plus more round-trips is always worse than lots of cookies plus fewer round-trips.
> The server can send a reply-header once which will tell the client which URLs need cookies perpetually, and the client can store that and choose whether it sends the cookies repeatedly or just when requested.
Everyone hates configuring caching, so in most cases site operators will leave it at a default of "send everything", and we're back to square one.
> No change is needed.
I was thinking that servers would need to remember state between the initial client request and the moment the client sends another request with the cookies. But on second thought, that's indeed not necessary.
> So the engine of modern commerce is just a collection of easy hacks. Fantastic.
I'm afraid so
There are at least a dozen different attacks on DNS, but the main ones regarding record validation include multiple types of spoofing and MITM (at both the DNS and IP level), cache poisoning, account takeover (of either the nameserver or the registrar), DoS attacks, etc.
Cache poisoning is the easiest method, and contrary to whatever Cloudflare says, it's trivial. The DNS transaction ID is 16 bits. All you have to do is flood the shit out of the resolver with spoofed packets and eventually one of the transaction IDs will hit, and your attack is successful. It's low-bandwidth, takes at most a couple of hours, and nobody notices. This is one of the many reasons you can't just trust whatever DNS says.
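A back-of-envelope sketch of that claim, assuming the 16-bit transaction ID is the only entropy the attacker has to guess (source-port randomization, which modern resolvers add precisely to break this arithmetic, multiplies the search space enormously):

    // Chance of winning one race window, and races needed for a 50% shot.
    const idSpace = 2 ** 16;                         // 65,536 possible transaction IDs
    const spoofedPerWindow = 200;                    // assumed packets landed before the real reply
    const pPerRace = 1 - (1 - 1 / idSpace) ** spoofedPerWindow;
    const racesFor50 = Math.ceil(Math.log(0.5) / Math.log(1 - pPerRace));
    console.log(pPerRace.toFixed(4), racesFor50);    // ~0.0030 per race, ~228 races for 50%

If the attacker can trigger a fresh race every few seconds (e.g. by querying random non-existent subdomains), a couple hundred races fits comfortably inside "a couple of hours".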
What the client caches is not always the client's choice, as is the case with HSTS. But it could be made one if testing of this proposal (which, again, I came up with in 2 minutes) showed better results one way or another.
But all this is moot anyway cuz nobody gives a crap.
A part of the issue is IMO that browsers have become ridiculously bloated everything-programs. You could take about 90% of that out and into dedicated tools and end up with something vastly saner and safer and not a lot less capable for all practical purposes. Instead, we collectively are OK with frosting this atrocious layer cake that is today's web with multiple flavors of security measures of sometimes questionable utility.
End of random rant.
"You could take about 90% of that out and into dedicated tools "
But then you would loose plattform independency, the main selling point of this atrocity.
Having all those APIs in a sandbox that mostly just work on billion devices is pretty powerful and a potential succesor to HTML would have to beat that, to be adopted.
The best thing to happen, that I can see, is that a sane subset crystalizes, that people start to use dominantly, with the rest becoming legacy, only maintained to have it still working.
But I do dream of a fresh rewrite of the web since university (and the web was way slimmer back then), but I got a bit more pragmatic and I think I understood now the massive problem of solving trusted human communication better. It ain't easy in the real world.
But do we need e.g. serial-port or raw USB access straight from a random website? Even WebRTC is a bit of a stretch. There is a lot of cruft in modern browsers that does little except increase attack surface.
This all just drives a need to come up with ever more tacked-on protection schemes because browsers have big targets painted on them.
> Even WebRTC is a bit of a stretch
You remove that, and videoconferencing (for business or person to person) has to rely on downloading an app, meaning whoever is behind the website has to release for 10-15 OSes now. Some already do, but not everyone has that budget so now there's a massive moat around it.
> But do we need e.g serial port or raw USB access straight from a random website
Being able to flash an IoT device (e.g. an ESP32) from the browser is useful for a lot of people. For the "normies", there was also Stadia allowing you to flash their controller into a generic Bluetooth/USB one on a website, using WebUSB. Without it, Google would have had to release an app for multiple OSes, or, more likely, would have just left the devices as paperweights. Also, you can use FIDO/U2F keys directly now, which is pretty good.
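The browser-side entry point for that kind of flashing flow is typically the Web Serial API (the actual ESP bootloader protocol layered on top is far more involved); a minimal sketch:

    // Must be called from a user gesture (e.g. a click handler);
    // requestPort() opens the browser's device-picker dialog.
    async function serialDemo() {
      const port = await navigator.serial.requestPort();
      await port.open({ baudRate: 115200 });
      const writer = port.writable.getWriter();
      await writer.write(new Uint8Array([0xc0]));   // illustrative byte, not a real bootloader command
      writer.releaseLock();
      await port.close();
    }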
Browsers are the modern Excel: people complain that they do too much and that you only need 20%. But it's a different 20% for everyone.
I'll flip that around on you: why oh why do we need browsers to carry these security holes in them? The Stadia flasher is a good example: how do I know that a website doesn't contain a device flasher that will turn one of my connected devices into a malicious actor that will attempt to take over whatever machine it's plugged into?
You know because there is an explicit permission box that pops up, asks if you want to give this website access to a device, and asks you to select that device.
Same as your camera/microphone/location.
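For reference, that prompt is gated in the API itself; nothing is reachable until the user picks a device in the chooser that requestDevice() opens (the vendor-ID filter below is just an example value):

    async function pickDevice() {
      // Must be triggered by a user gesture; shows the browser's device chooser,
      // listing only devices that match the filter.
      const device = await navigator.usb.requestDevice({
        filters: [{ vendorId: 0x18d1 }]   // example: Google's USB vendor ID
      });
      await device.open();
      // ...claim an interface and transfer data only after this explicit consent...
      await device.close();
    }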
But that still gives completely unvetted direct access to the device to a website! People have been pointing to Itch.io games that supposedly require direct USB access. How hard is it to hide a script in there that reprograms a controller into something malicious?
If you download an executable from a website and run it... pretty much the same thing?
If you give USB access, it is not really a website anymore, rather an app delivered through the web. I don't see a fundamental difference in trust.
If anything, I can verify the web-based version more easily, and I certainly won't give access to a random website, just like I don't download random exes from websites.
Performance is lower, yes, and well... like I said, it is all a big mess. Just look at the global namespace in JS. I still use it because of that power feature called platform independence. What I release, people can (mostly) just use. I (mostly) don't care which OS the user has.
A file that lands on my hard drive is automatically scanned for malware. That same kind of protection isn't in place against malicious scripts downloaded by my browser via an opaque HTTPS connection and run in-process.
And we all know that non-technical users never just click Yes to make the annoying popup go away.
Itch.io games and controller support.
You have sites now that let you debug microcontrollers on your browser, super cool.
Same thing but with firmware updates in the browser. Cross platform, replaced a mess of ugly broken vendor tools.
While that's pretty convenient, I'm worried about what happens when the vendor shuts down the website. "Ugly broken vendor tools" can be run forever in a VM of an old system, but a website would be gone forever unless it's purely client-side and someone archived it.
Just because you can do something doesn't mean you should.
Your micro-controllers should use open standards for their debugging interface and not force people to use the vendor website.
WebRTC I have used for many years and would miss a lot. P2P is awesome.
WebUSB I don't use and wouldn't miss right now, but... the main potential use case is security, and it sounds somewhat reasonable:
"Use in multi-factor authentication
WebUSB in combination with special purpose devices and public identification registries can be used as key piece in an infrastructure scale solution to digital identity on the internet."
https://en.wikipedia.org/wiki/WebUSB
> But do we need e.g serial port or raw USB access straight from a random website?
But do we need audio, images, Canvas, WebGL, etc.? The web could just be plain text and we'd still get most of the "useful" content; add images and you get the vast majority of it.
But the idea of the web as a rich environment with all of these bells and whistles is a good thing, IMO. Yes, there's attack surface to consider, and it's not negligible. However, the ability to connect so many different things opens up simple access to things that would otherwise require discrete apps and tooling.
One example that kind of blew my mind: I wanted a controller overlay for my Twitch stream. After a short bit of looking, there isn't even a plugin needed in OBS (streaming software). Instead, you add a Web View layer, point it to GamePad Viewer[1], and you're done.
Serial and USB are possibly a boon for very specific users with very specific accessibility needs. Also, IIRC some of the early iPhone jailbreaks worked via websites on a desktop with your iPhone plugged in over USB. Sure, these are niche and could probably be served just as well or better with native apps, but the web also makes the barrier to entry so much lower.
[1]: https://gamepadviewer.com/
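An overlay like that presumably sits on the standard Gamepad API rather than raw USB access; a minimal sketch of the polling loop such a page would run:

    // Poll connected controllers once per animation frame.
    function pollPads() {
      for (const pad of navigator.getGamepads()) {
        if (!pad) continue;                       // unused slots can be null
        const pressed = pad.buttons.map(b => b.pressed);
        console.log(pad.id, pressed, pad.axes);   // an overlay would draw these instead
      }
      requestAnimationFrame(pollPads);
    }
    window.addEventListener("gamepadconnected", () => requestAnimationFrame(pollPads));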
> But do we need e.g serial port or raw USB access straight from a random website?
Yes. Regards, CIA, Mossad, FSB etc.
How else am I going to make a game in the browser that can be controlled with a controller?
Every decent host OS already has a dedicated driver stack to provide game controller input to applications in a useful manner. Why the heck would you ship a reimplementation of that in JS in a website?
So that you can take input from controllers that haven't been invented yet and won't fit the HID model.
If it hasn't been invented yet, you don't need driver software for it, do you? ;)
Anyway, in your scenario the controller would be essentially a one off and you'd be better off writing a native app to interface with it for the one computer this experiment will run on.
If it hasn't been invented yet we don't know the implications of giving a website access to it either.
And that's before realizing it's already a bad idea with existing devices because they were never designed for giving untrusted actors direct access.
That's why we have a privacy and security sandbox in browsers.
You don't, that's the point: not everything needs to be crammed into a browser.
Unlikely. The convenience incentives are far too high to leave features on the table.
Not unlike the programming language or the app (growing until it half-implements LISP or half-implements an email client), the browser will grow until it half-implements an operating system.
For everyone else, there's already w3m.
> Having all those APIs in a sandbox that mostly just works on billions of devices is pretty powerful, and a potential successor to HTML would have to beat that to be adopted.
I think the giant downside is that they've written a rootkit that runs on everything, and to try to make up for that they want to make it so that only sites they allow can run.
It's not really very powerful at all if nobody can use it, at that point you are better off just not bothering with it at all.
The Internet may remain, but the Web may really be dead.
"It's not really very powerful at all if nobody can use it"
But people do use it, like the both of us right now?
People also use maps, do online banking, play games, start complex interactive learning environments, collaborate in real time on documents etc.
All of that works right now.
> to try to make up for that they want to make it so only sites they allow can run
What do you mean, you can run whatever you want on localhost, and it's quite easy to host whatever you want for whoever you want too. Maybe the biggest modern added barrier to entry is that having TLS is strongly encouraged/even needed for some things, but this is an easily solved problem.
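For what it's worth, the local case really is a one-liner, and browsers already treat http://localhost as a secure context; locally-trusted certificates are a couple of commands with mkcert if you want real TLS:

    # serve the current directory (either works)
    npx serve .
    python3 -m http.server 8000

    # locally-trusted certificate for https://localhost
    mkcert -install && mkcert localhost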
The blog post and several anecdotes in the comments prove otherwise
Not sure if it counts, but I've been enjoying LibreWolf. I believe it's just a stripped-down Firefox.
>A part of the issue is IMO that browsers have become ridiculously bloated everything-programs.
I don't see how that solves the issue the PSL tries to fix. I was a script kiddie hosting Neopets phishing pages on free cPanel servers from <random>.ripway.com back in 2007. Browsers were way less capable then.
The PSL and the way cookies work are just part of the mess. A new approach could solve that in a different way, taking into account all the experience we've had with script kiddies and professional scammers and phishers since then. But I also don't really have an idea of where and how to start.
And of course, if the new solution completely invalidates old sites, it just won't get picked up. People prefer slightly broken but accessible to better designed but inaccessible.
> People prefer slightly broken but accessible to better designed but inaccessible.
We live in a world where whatever FAANG adopts is a de facto standard. "Accessible" these days means Google/Gmail/Facebook/Instagram/TikTok works. Everything else is usually forced to follow along.
People will adopt whatever gives them access to their daily dose of doomscrolling and then complain about rather crucial parts of their lives, like online banking, not working.
> And of course, if the new solution completely invalidates old sites, it just won't get picked up.
Old sites don't matter, only high-traffic sites riddled with dark patterns matter. That's the reality, even if it is harsh.
> People prefer slightly broken but accessible to better designed but inaccessible.
It's not even broken, as the edge cases are addressed by ad-hoc solutions.
OP is complaining about global infrastructure not having a pristine design. At best it's a complaint over a desirable trait. It's hardly a reason to pull the junior-developer card and mindlessly advocate for throwing everything out and starting over.
2007, you say, and less capable, you say?!
Try the '90s! We had to fight off ActiveX plugins left and right in the good olde Internet Explorer! Yarr! ;-)
Are you saying we should make a <Unix Equivalent Of A Browser>? A large set of really simple tools that each do one thing really, really, really pedantically well?
This might be what's needed to break out of the current local optimum.
Maybe it's time to revive something like the uzbl[1] project, or start something similar.
[1] https://www.uzbl.org/
I haven't thought of it that way, but that might be a solution.
There was an attempt in that direction.
https://www.uzbl.org/
You are right from a technical point of view, I think, but in reality, how would one begin to make that change?
I'm under the impression that CORS largely solves it?
which is still much too new to be able to shut down the PSL, of course. But maybe in 2050.
Since this is being downvoted: no, I'm quite serious.
CORS lets sites define their own security boundaries between subdomains, with mutual validation. If you're hosting user content on a subdomain, just don't allow-origin it: that is a clear statement that it's not "the same site". The PSL plays absolutely no part in that logic; it seems clear to me that CORS is at least in part intended to replace the PSL.
Do other sites (like Google's safety checks) use CORS for this purpose? Dunno. Seems like they could, though? Or am I missing something?
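For reference, the header mechanics are simple enough to test: a subdomain that never sends the opt-in header is opaque to scripts on its siblings, regardless of any suffix list (though CORS governs cross-origin reads, not cookie scoping, which is the part the PSL mostly exists for):

    Response from usercontent.example.com with no opt-in (cross-origin scripts get nothing):

        HTTP/1.1 200 OK
        Content-Type: text/html

    Response that explicitly opts in a single sibling origin:

        HTTP/1.1 200 OK
        Access-Control-Allow-Origin: https://www.example.com
        Vary: Origin
        Content-Type: application/json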