For the "prove the server doesn't touch the data" problem — the realistic path today is probably reproducible builds + published bundle hashes.
Concretely: the sdocs.dev JS bundle should be byte-for-byte reproducible
from a clean checkout at a given commit. You publish { gitSha, bundleSha256 }
on the landing page. Users (or agents) can compute the hash of what their browser
actually loaded (DevTools → Sources → Save As → sha256) and compare.
That closes the "we swapped the JS after deploy" gap. It doesn't close
"we swapped it between the verification moment and now" — SRI for SPA
entrypoints is still not really a thing. That layer is on browser vendors.
The "two agents review every merge" idea upthread is creative, but I worry
that once the check is automated people stop reading what's actually
verified. A dumb published hash is harder to fake without getting caught.
(FWIW, working on a similar trust problem from the other end — a CLI + phone
app that relays AI agent I/O between a dev's machine and their phone
[codeagent-mobile.com]. "Your code never leaves your machine" is easy to
say, genuinely hard to prove.)
My solution is now live at https://sdocs.dev/trust. Open to feedback!
That's basically what I'm working on right now. We'll let you compare all the publicly served files against their hashes on GitHub.
Ya. I could imagine a browser extension performing some form of verification loop for simpler webpages. Maybe too niche.