Yeah, redoing the defaults would probably be good.
On the other hand, I tried doing a Google search with javascript disabled today, and I learned that Google doesn't even allow this. (I also thought "maybe that's just something they try to pawn off on mobile browsers", but no, it's not allowed on desktop either.)
So the state of things for "how should web browsers work?" seems to be getting worse, not better.
Wow, I used to be able to search Google even from terminal browsers like 'elinks'.
I once used elinks to fix a broken login screen after an upgrade: I switched to a virtual console, looked up the problem, found the commands that fixed it, and ran them.
I think it still works if you set your user agent to something like lynx. I had a custom UA set for Google search in Firefox just for this purpose and to disable AI overviews.
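For anyone who wants to try this, Firefox supports per-domain user-agent overrides via string prefs named `general.useragent.override.<domain>` in about:config. A user.js sketch, assuming that pref family still works and that Google still serves the basic HTML results page to text-browser UAs (the exact UA string here is an assumption):

```javascript
// user.js — hypothetical sketch: override the UA only for google.com,
// pretending to be Lynx so Google serves the basic, no-JS results page.
// The domain-scoped pref pattern applies the override to that site only;
// plain "general.useragent.override" would change the UA everywhere.
user_pref("general.useragent.override.google.com",
          "Lynx/2.8.9rel.1 libwww-FM/2.14");
```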
I just tried with the "links" browser and got: "Update your browser. Your browser isn't supported anymore. To continue your search, upgrade to a recent version."
robots.txt is a good precedent here: a single file that defines behavior for the whole domain. Something similar for security could be enough for a large number of websites.
Also, a new header like "sec-policy: foo-url" might be a clean way to move those definitions out of the app+web+proxy+CDN mesh and into a single, well-defined place.
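To make that concrete, the shape of the idea might look like this. The header name `sec-policy`, the file location, and the fields are all hypothetical, sketched by analogy with robots.txt and the /.well-known/ convention:

```
HTTP/1.1 200 OK
sec-policy: https://example.com/.well-known/sec-policy.txt

# /.well-known/sec-policy.txt — one origin-wide declaration,
# instead of per-response headers set separately across the
# app, the web server, the proxy, and the CDN.
frame-ancestors: none
content-security-policy: default-src 'self'
```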
Replying to myself: I found that this idea has already been proposed:
"Origin policy was a proposal for a web platform mechanism that allows origins to set their origin-wide configuration in a central location, instead of using per-response HTTP headers." - https://github.com/WICG/origin-policy
But its status has been "[On hold for now]" for at least three years.
These files are just ignored by everything. We don't need .txt files, we need good defaults.