URL data sites are always very cool to me. The offline service worker part is great.

The analytics[1] is incredible. Thank you for sharing (and explaining)! I love this implementation.

I'm a little confused about the privacy mention. Maybe the fragment data isn't passed to the server, but that's not a particularly strong guarantee. The JavaScript still has access, so privacy is just a promise as far as I can tell.

Am I misunderstanding something, and is there a stronger mechanism in browsers preserving the fragment data's isolation? Or is there some way to prove a URL is running the code from a GitHub repo without modification?

[1]: https://sdocs.dev/analytics

Thanks for the kind words re the analytics!

You are right re privacy. It is possible to go from URL hash -> parse -> server (that's not what SDocs does, to be clear).

I’ve been thinking about how to prove our privacy mechanism. The idea I have in my head at the moment is to have 2+ established coding agents review the code after every merge to the codebase and provide a signal (maybe visible in the footer) that, according to them, it is secure and the check was made after the latest merge. Maybe overkill?! Or maybe a new way to “prove” things?? If you have other ideas please let me know.

How about simply making the website an app and having it load your markdown file with a button and a file browser, just like e.g. https://app.diagrams.net/?

And I believe you can then tell the browser that you need no network communication at that point, and a user can double-check that.
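
A minimal sketch of that app-style flow, assuming a plain file input and purely client-side rendering (the element IDs and the bare-bones "renderer" are placeholders, not anything SDocs actually ships):

  // Load a Markdown file entirely client-side: its contents never leave the browser.
  // Assumes an <input type="file" id="picker"> and a <div id="output"> on the page.
  const picker = document.querySelector<HTMLInputElement>("#picker");
  const output = document.querySelector<HTMLDivElement>("#output");

  picker?.addEventListener("change", async () => {
    const file = picker.files?.[0];
    if (!file || !output) return;

    // File.text() reads from disk only; no network request is made, which a user
    // can confirm in the DevTools Network tab (or by switching the browser offline).
    const markdown = await file.text();
    output.textContent = markdown; // stand-in for a real Markdown renderer
  });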

No, I don't have any good ideas. Just hoping someone else does, or that I'm missing something.

I think it's in the hands of browser vendors.

The agent review a la socket.dev probably doesn't address all the gaps. I think you're already doing about as much as you reasonably can.

Thanks. The question has made me wonder about the value of some sort of real-time verification service.

If it were possible to isolate that part of the code and essentially freeze it for long periods, at least people would know it wasn't being tweaked out from under them all the time.

That is my half of a bad idea.

I have something coming out soon (just working on it). Your client (browser) has hashing algos built into it, so the browser can run a hash of all the front-end assets it is served. Every commit merged into main will cause a hash of all the public files to be generated. We will allow you to compare the hashes of the front-end files in your browser with the hashes from the public GH project. Interested to know what you think...
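
A rough sketch of what that in-browser check could look like, using the Web Crypto API; the manifest URL, its shape, and the file list are illustrative assumptions, not SDocs' actual implementation:

  // Hash the front-end assets the page loads and compare them against a manifest
  // of expected SHA-256 hashes generated per merge to main.
  type HashManifest = Record<string, string>; // path -> hex-encoded sha256

  async function sha256Hex(buf: ArrayBuffer): Promise<string> {
    const digest = await crypto.subtle.digest("SHA-256", buf);
    return Array.from(new Uint8Array(digest))
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
  }

  async function verifyAssets(manifestUrl: string): Promise<void> {
    const manifest: HashManifest = await (await fetch(manifestUrl)).json();
    for (const [path, expected] of Object.entries(manifest)) {
      // Fetch each asset fresh, bypassing the HTTP cache, and hash what came back.
      const res = await fetch(path, { cache: "no-store" });
      const actual = await sha256Hex(await res.arrayBuffer());
      console.log(`${path}: ${actual === expected ? "OK" : "MISMATCH"}`);
    }
  }

  // Hypothetical manifest path; the real one would live alongside the site.
  verifyAssets("/asset-hashes.json");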

That sounds like a good idea. Any step toward transparent security is a good one.

For the "prove the server doesn't touch the data" problem — the realistic path today is probably reproducible builds + published bundle hashes.

Concretely: the sdocs.dev JS bundle should be byte-for-byte reproducible from a clean checkout at a given commit. You publish { gitSha, bundleSha256 } on the landing. Users (or agents) can compute the hash of what their browser actually loaded (DevTools → Sources → Save As → sha256) and compare.
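
A sketch of the build-side half of that, assuming a small Node/TypeScript script run in CI after each merge; the dist path, manifest name, and flat directory layout are illustrative:

  // Compute SHA-256 hashes of the built front-end files and write a manifest
  // ({ gitSha, files }) that can be published on the landing page for comparison.
  import { createHash } from "node:crypto";
  import { readdirSync, readFileSync, writeFileSync } from "node:fs";
  import { execSync } from "node:child_process";
  import { join } from "node:path";

  const distDir = "dist"; // wherever the reproducible build output lands
  const gitSha = execSync("git rev-parse HEAD").toString().trim();

  const files: Record<string, string> = {};
  for (const name of readdirSync(distDir)) {
    // Assumes a flat dist directory; a real script would walk subdirectories.
    const bytes = readFileSync(join(distDir, name));
    files[name] = createHash("sha256").update(bytes).digest("hex");
  }

  writeFileSync("asset-hashes.json", JSON.stringify({ gitSha, files }, null, 2));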
   
That closes the "we swapped the JS after deploy" gap. It doesn't close "we swapped it between the verification moment and now": SRI for SPA entrypoints is still not really a thing. That layer is on browser vendors.
                                                                                                                                                                                   
  The "two agents review every merge" idea upthread is creative, but I worry                                                                                                       
  that once the check is automated people stop reading what's actually                                                                                                             
  verified. A dumb published hash is harder to fake without getting caught.                                                                                                        
                                                                                                                                                                                   
(FWIW, working on a similar trust problem from the other end: a CLI + phone app that relays AI agent I/O between a dev's machine and their phone [codeagent-mobile.com]. "Your code never leaves your machine" is easy to say, genuinely hard to prove.)

My solution is now live at https://sdocs.dev/trust. Open to feedback

That's basically exactly what I'm working on now, actually. We will let you compare all the publicly served files with their hashes on GitHub.

Ya. I could imagine a browser extension performing some form of verification loop for simpler webpages. Maybe too niche.