This is coming close to WWW's original vision because the very first web browser was also an editor. Tim Berners-Lee's application on the NeXT was basically a wrapper for the operating system's built-in rich text editing class named TextView. (It later became NSTextView on Apple's Mac OS X and still powers the TextEdit app on Mac.)
We lost editing for two reasons:
1) The HTTP PUT method didn't exist yet, so edited HTML files could only be saved locally.
2) NCSA built Mosaic, the cross-platform web browser that defined what the WWW was for 99% of users, and they didn't include editing because that would have been too complex to build from scratch in their multi-platform code base.
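(As a footnote, PUT did eventually land in HTTP/1.1, so the save-back flow described above can be sketched in a few lines today. This is just an illustration: the URL is hypothetical, and a real server would have to be configured to accept PUT on that path.)

```python
import urllib.request

# Edited HTML that an editor-browser would want to write back to the server.
html = b"<html><body><h1>Edited in the browser</h1></body></html>"

# Build (but don't send) an HTTP PUT request for the page's own URL.
req = urllib.request.Request(
    "http://example.org/page.html",   # hypothetical URL
    data=html,
    method="PUT",
    headers={"Content-Type": "text/html"},
)

# urllib.request.urlopen(req) would perform the actual save-back,
# assuming the server accepts PUT (most static hosts do not).
print(req.get_method())  # PUT
```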
Making a more read/annotate/write web is near and dear to my heart. There's a lot I find admirable, even noble, about projects like Hyperclay!
But it's a distinctly different answer for each page to build its own toolkit for the user (Hyperclay) vs TBL's read-write web. The user agent ought, imo, to afford standard tools that work across web pages and extend the user's agency whatever site they are visiting.
Yes, I agree. My dream would be to one day work on a browser and integrate Hyperclay into it. I believe web apps have been around long enough as a core web technology that browsers should ship with a local web host, knowledge of what a user and user account is, and the ability to persist to disk whatever the user chooses.
In a similar vein, it looks like there is a working group for linked web storage at:
https://www.w3.org/groups/wg/lws/
That would likely have some overlap.
If you were to have an accepted W3C proposal and a working implementation in local browser forks, you could potentially chat with the browser teams about adding it as an experimental feature, first behind a flag users would have to turn on manually, and later it could potentially be integrated by default.
> Making a more read/annotate/write web is near and dear to my heart
Isn't that basically Wikipedia? I can't imagine a much simpler system that could work at modern web scale.
I think the scale difference is the whole point. Wikipedia has billions of readers, while these Hyperclay-style persistent documents have only a few.
Saying "we don't need this because Wikipedia already solved it" is kind of like saying in 1976: "Nobody needs the Apple II, we already have IBM mainframes that have solved every useful problem in computing much better."
To add to that, the W3C maintained the Amaya "browser", or "web editor" as they liked to call it, for about a decade and a half as their vision for the web.
I think it was not just an appealing idea: Amaya itself was a solid implementation for a "testbed" (again, their word).
I can see why it died but I still think it is a bit of a shame it did.
I had never heard of Amaya before.
> It supports HTML 4.01, XHTML 1.0, XHTML Basic, XHTML 1.1, HTTP 1.1, MathML 2.0, many CSS 2 features, and SVG.
Perfect. Doesn't need anything else.
Amaya is great when you just need to do some quick edits.
Saving files locally was the same as saving files on the web in the original TBL context.
Imagine having a nice UNIX workstation on your desk at a university. It would resolve to machine.department.university.ac.uk rather than being hidden behind a router. If you wanted, you could run an X session on it or transfer files to and from it.
With standard-issue Netscape of the era you could save an HTML file locally and it would be fully accessible from anywhere on the web.
The university would have people with the skills to set up the network for this, which was a difficult thing to do at the time.
In reality you would not save everything locally. The main department server would have a network share for you, mounted over NFS. So your files would be 'saved locally' over the NFS share and resolve to department.university.ac.uk/user.
You could also log in to any workstation in the department and it would mount your NFS shares, with PCs of the era usually capable of running X and NFS in a university setting.
Servers physically existed in the server room rather than in the cloud.
I much preferred this model as, on SGI workstations, you had it all working out of the box. All you needed was some help with the networking.
Also important is that the web was originally all about structuring information rather than hacking it to make it pretty. It was up to the client to sort out the presentation, not the server.
In time we lost the ability to structure information, in part because the 'section' element did not make it into early HTML. Everyone wanted the WYSIWYG model that worked for word processors, where it was all about how it looked rather than how it worked.
We proceeded to make HTML that started as a soup of nested tables before the responsive design people came along and made that a soup of nested div elements.
Eventually we got article, section and other elements to structure information, but by then it was too late.
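To make the contrast concrete, here is a minimal sketch (the content is made up) of the kind of structured, human-readable markup being described, versus the presentational soup that won out:

```html
<!-- Structured: the markup says what things are; the client styles them -->
<article>
  <h1>Departmental News</h1>
  <section>
    <h2>New File Server</h2>
    <p>The department NFS server has been upgraded.</p>
  </section>
</article>

<!-- Presentational soup: the markup only says where things should sit -->
<div class="row"><div class="col"><b>Departmental News</b></div></div>
```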
It is easy to design something that is incredibly complicated and only understood by a few. It is far harder to design something that is simple and can be understood by the many.
We definitely lost our way with this. Nowadays people only care about platforms such as social media; nobody is writing HTML the TBL way, with everything kept simple. HTML has become a specialist skill where you need lots of pigeonholed sub-skills. Hardly anyone on these teams of specialists has read the HTML spec, and no human-readable HTML gets written, even though this is fully doable.
It seems that you are one of the few that understands the original context of HTML.
> web browser was also an editor
Ummmm all the browsers I know of are also editors... Are there any that aren't?
Edit - does no one use dev tools anymore? No HTML? No vanilla JS and CSS? Everyone just using TS, React and gluing things together? Like, you literally have an entire IDE in your browser (assuming you use anything derived from Chrome, Firefox or Safari) that can code a web page live...
You're describing built-in developer tools for editing local files during development. The comment you're replying to is describing the vision of a browser which can edit remote files as part of the normal user workflow, not as a developer-only activity.
NeXT machines were hardly mass market user machines... They were almost exclusively developer machines.
Also Chrome does have stuff like SSH extensions.
That being said, some of the computing paradigms of the '80s and early '90s were very cool and I wish they had caught on... Lisp machines, Smalltalk, early web ideas were interesting...
DevTools was not part of the original browsers. Firebug brought the concept into existence in the first place.
As a sidenote, does manipulating forms count as editing?
>Firebug brought the concept to existence in the first place.
There were other browser "dev tools" before firebug.
https://www.otsukare.info/2020/08/06/browser-devtools-timeli...
Crazy - I had forgotten about the earlier ones! The IE one I used at some point, and Fiddler too.
I still use Fiddler. I prefer it to the browser inspector's network tab when I need to get into the weeds. It does so much, and I can write custom proxy tweaks with JavaScript (actually JScript.NET, but it works just like good old JS), and it works with other software; I use it with NodeJS all the time. It's like the Swiss army knife of HTTP.
Netscape had a JavaScript debugger, and IE had a debugger as well. What Firebug did was pull the inspector, debugger, console, and everything else together into a really nice dev experience. The goal posts were moved far back, and the major browsers hurriedly released their "dev tools" to counter Firebug. Chrome was the first after, followed shortly by Safari (which already had an inspector). It would take MS another six years to do the same for IE8.
Netscape had editing tools. Firefox has editing tools. Chrome has editing tools. I think Safari does too?
Like, Netscape Composer came OUT of Navigator...
They are talking about a WYSIWYG editor like Netscape Composer