You already trust that same web-of-trust key graph in every single layer of your current CI solution. Everything at Astral, and by all indications also at OpenAI, is built with third-party services and third-party (blind) signing, using third-party binaries signed by those 5000 keys directly or indirectly.

That web of trust is the trust foundation of the entire internet, and likely of every server that powers GitHub, Astral, and OpenAI, including every CI system you described.

https://kron.fi/en/posts/stagex-web-of-trust/

One node in that graph is also nowhere near good enough to stop supply chain attacks, which is why we use -multiple- points thanks to full source bootstrapped deterministic builds.

Let me flip it around and ask why anyone should trust that an Astral/OpenAI employee who does not sign their commits and does not sign their reviews has not been impersonated, has not had an account takeover via the phishable 2FA that is allowed, and won't just make a commit to the CI stack for uv (or uv itself!) under a pseudonym and then merge their pseudonym's code.

One person can burn it all down in spite of the practices in this blog post. Letting machines blindly sign whatever non-deterministic outputs come out of an automated process does not buy you much in practice against many of the supply-chain attack tactics actually used in the wild. The same applies, of course, to the third-party build systems you trust. GitHub themselves don't follow any of these basic supply-chain security practices either, so there are many, many points of failure here.

Astral/OpenAI are actually giving -thousands- of randos other than the authors the ability to backdoor the uv binaries you produce, and without a reproducible full source bootstrapped build process, no one would be able to quickly or easily prove it.

To package or change uv in stagex, one maintainer must sign the commit, and another must sign the review/merge commit. Then -multiple- maintainers must each compile 180 bytes of human-readable machine code, build up to tinycc, then gcc, then llvm, and eventually a Rust compiler, which we then use to build uv, all deterministically.
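
The check that makes this multi-maintainer scheme work is simple: every independent rebuild must be bit-for-bit identical before anyone signs. A minimal sketch of that quorum check (the names and artifacts are hypothetical, not stagex's actual tooling):

```python
import hashlib

def consensus_digest(artifacts: dict[str, bytes], quorum: int) -> str:
    """Return the artifact digest only if at least `quorum` independent
    builders produced bit-for-bit identical output."""
    digests = {name: hashlib.sha256(blob).hexdigest()
               for name, blob in artifacts.items()}
    unique = set(digests.values())
    if len(unique) != 1:
        # any disagreement means the build is not deterministic (or someone lied)
        raise ValueError(f"non-deterministic build, got digests: {digests}")
    if len(digests) < quorum:
        raise ValueError(f"only {len(digests)} builds, need {quorum}")
    return unique.pop()

# Hypothetical example: three maintainers independently rebuild the artifact.
builds = {"alice": b"uv-binary", "bob": b"uv-binary", "carol": b"uv-binary"}
digest = consensus_digest(builds, quorum=2)
```

The point of the quorum is that no single builder's machine (or key) is a single point of failure: a backdoor has to survive every independent rebuild to go unnoticed.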

So in our process we actually don't trust any third parties, other than the actual authors of the source code, and those only to a limited extent. That said, we are also working right now on a solution for decentralized review of upstream code, because we largely don't trust upstreams not to let their identities get stolen: most teams, for whatever reason, refuse to sign their commits and reviews, so we will have to do that for them too. Regardless, we can prove we faithfully deliver honest compilations of whatever upstream code is published, without any single points of failure.

We ask users downloading binaries to trust that a bunch of maintainers are putting their personal reputations and keys (which long predate AI and are hard to impersonate) on the line to sign their bit for bit identical builds of uv, and the entire toolchain underneath it, and provide faithful compilations of upstream source code.

It would make everyone a lot safer if upstreams, especially well funded ones, could meet or exceed the threat model we must support downstream.

> You already trust that same web-of-trust key graph in every single layer of your current CI solution. Everything at Astral, and by all indications also at OpenAI, is built with third-party services and third-party (blind) signing, using third-party binaries signed by those 5000 keys directly or indirectly.

I don't think we do; there are places we trust distribution signers, but we don't do so in a "web" topology; we trust them because a small set of keys is pre-baked into VMs, Docker images, etc. The web of trust, as it existed 20 years ago, is dead[1].

Topologically this is a lot like a CA ecosystem, except worse in material ways: even distros (full of talented, motivated people!) struggle to operationalize PGP, so we end up with a bunch of de facto unexpirable and irrevocable keys[2] that nobody is really tracking. Consequently, nobody is really factoring these into their security story, whether or not they're a web.

[1]: https://inversegravity.net/2019/web-of-trust-dead/

[2]: https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1461834

You can call it dead, and yet it is the only system signing internet infrastructure at scale right now, because of Debian, SUSE, Fedora, Arch, Gentoo, Ubuntu, Red Hat, etc. It is very much alive for the use cases that need it most. If one of those keys, or one of the people who hold them, were compromised, most of the internet would be backdoored. Except for things built on stagex, in which case you would have to compromise several. Still not good enough, but better.

PGP's web of trust, for all its faults and early design facepalms (of which there are many), is the only proof-of-human system we ever built before AI, where humans meet humans and sign each other's keys. No one can reasonably expect that any recently created key was not made by a made-up LLM identity, unless it was signed into the web of trust by well-published existing keys held by well-known and trusted humans.

But even if you don't want to look at the Web of Trust you can prove the key I sign stagex releases with is mine via all sorts of other ways thanks to keyoxide: https://keyoxide.org/E90A401336C8AAA9

Also, the OpenPGP spec supports modern crypto now, plus attestation via DNS and even Hacker News. You can attest that my PGP key is tied to my HN profile right now. I would agree that -gpg- is dead, with no real reason to use it anymore now that we have modern Rust tooling with modern crypto.

But! If someone wants to generate an SSH key on a smartcard or something and sign with that instead, we would absolutely consider it. We are not married to supporting only a single spec, but we absolutely need human beings to hold their own private keys on smartcards, which are themselves attested by other human-held private keys and by the online services tied to those same identities.

No, I call it dead because it's dead. The SKS network is dead, the strong set is moribund, and the remaining real users of PGP are instead slinging key bundles around by baking them into pre-trusted artifacts (like ISOs). But that's not a "web of trust," it's just bespoke centralized key distribution with a certification format that every single serious cryptographer agrees is terrible.

(And this is before a more brute statistical argument: even at its greatest extent, the PGP ecosystem was minuscule[1].)

[1]: https://moxie.org/2015/02/24/gpg-and-me.html

I am deeply aware of Moxie's views on this, and we have talked about them at length, and he is wrong. Also, SKS and GnuPG are not OpenPGP. GnuPG no longer conforms to modern OpenPGP standards; it is the IE6-grade implementation we should all stop talking about and using, and on that point at least Moxie and I agree. I found a major CVE in gpg myself.

But regardless of tooling, it is about the keys and who holds them and who they endorse. It does not really matter how keys are distributed. It matters that keys signed other keys and that we have a way of downloading them and verifying that.

We cache a copy of all 5444 keys in the web of trust of stagex maintainers in our keys repo, and you can draw a line from our keys to the keys that signed commits to the Linux kernel today. These also sync and update from a dozen SKS keyservers that are still online, for anyone who wants to build a key directory as we did.
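
"Drawing a line" through a web of trust is just reachability in a directed graph where an edge A → B means key A signed key B. A minimal BFS sketch, using hypothetical key names in place of real fingerprints:

```python
from collections import deque

def wot_path(signatures: dict[str, set[str]], start: str, target: str):
    """BFS over the signature graph (edge A -> B: key A signed key B).
    Returns a certification path from `start` to `target`, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for signed in signatures.get(path[-1], set()):
            if signed == target:
                return path + [signed]
            if signed not in seen:
                seen.add(signed)
                queue.append(path + [signed])
    return None

# Hypothetical fingerprints standing in for real keys in the cached keyring.
sigs = {
    "stagex-maintainer": {"debian-dev"},
    "debian-dev": {"kernel-dev"},
}
path = wot_path(sigs, "stagex-maintainer", "kernel-dev")
```

A real verifier would of course check the cryptographic validity and expiry of each certification along the path, not just the edge's existence; this only shows the graph side of the argument.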

Though SKS is rapidly being replaced by WKD, where every domain hosts its own keys and they are discovered automatically.
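
WKD discovery is mechanical: per the draft-koch-openpgp-webkey-service spec, the lowercased local part of the address is SHA-1 hashed, z-base32 encoded, and looked up under a well-known path on the domain. A rough sketch of the "advanced method" URL construction (no internationalized-address handling, so treat it as an approximation of the spec, not a conforming implementation):

```python
import hashlib

# z-base32 alphabet used by WKD
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    # MSB-first 5-bit groups; a 20-byte SHA-1 digest maps to exactly 32 chars
    nchars = (len(data) * 8 + 4) // 5
    bits = int.from_bytes(data, "big") << (nchars * 5 - len(data) * 8)
    return "".join(ZB32[(bits >> (5 * i)) & 31] for i in reversed(range(nchars)))

def wkd_advanced_url(address: str) -> str:
    """Build the 'advanced method' WKD discovery URL for an email address."""
    local, domain = address.split("@")
    digest = hashlib.sha1(local.lower().encode()).digest()
    return (f"https://openpgpkey.{domain.lower()}/.well-known/openpgpkey/"
            f"{domain.lower()}/hu/{zbase32(digest)}?l={local}")
```

Any client can derive this URL offline and fetch the key over HTTPS, which is how the automatic discovery works.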

Are you really going to say this has no trust or security value?

We should all just stop and let GitHub sign everything for us, even though they don't full-source bootstrap anything, sign commits, or use deterministic builds?

What is the outcome you are actually arguing for here?

> It does not really matter how keys are distributed. It matters that keys signed other keys and that we have a way of downloading them and verifying that.

I think it matters if you want to call it a WoT. But also, I don't think any signatures originating from these keys are being verified usefully at any meaningful scale.

> Are you really going to say this has no trust or security value?

I think it has marginal security value, maybe net-negative if you balance it with the fact that cryptographers and cryptographic engineers have to waste time arguing against using PGP.

> What is the outcome you are actually arguing for here?

I like binary transparency. I also think identity-based signing is significantly more ergonomic, and has seen more adoption in the last 4 years than PGP has in the last 35. And I think this is actually a stunning indictment, because I'd say that identity-based signing schemes like Sigstore are still running behind my expectations.

> I think it matters if you want to call it a WoT. But also, I don't think any signatures originating from these keys are being verified usefully at any meaningful scale.

Web of trust is a web of mutually trusting keys, not a network of servers. That web can be verified on any computer, as in the blog post by kron I linked earlier, and it is verified for every package install in our soon-to-be-published sxctl tool, which we will be presenting at some conferences next month.
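
The per-install check reduces to something like the following sketch (hypothetical, not sxctl's actual code, and with real signature verification abstracted to "this fingerprint attested this digest"): a package's digest must be attested by at least a threshold of keys already in the trusted web.

```python
import hashlib

def verify_install(package: bytes, attestations: dict[str, str],
                   trusted_keys: set[str], threshold: int) -> bool:
    """attestations maps key fingerprint -> sha256 hex digest that key signed.
    Accept only if >= threshold distinct trusted keys attested the real digest."""
    actual = hashlib.sha256(package).hexdigest()
    good = {fpr for fpr, d in attestations.items()
            if fpr in trusted_keys and d == actual}
    return len(good) >= threshold

# Hypothetical package and fingerprints; KEY-EVIL attests the wrong digest.
pkg = b"uv-1.0-static"
d = hashlib.sha256(pkg).hexdigest()
ok = verify_install(pkg, {"KEY-A": d, "KEY-B": d, "KEY-EVIL": "bad"},
                    trusted_keys={"KEY-A", "KEY-B"}, threshold=2)
```

Untrusted or mismatched attestations are simply ignored; only a quorum of web-of-trust keys agreeing on the same bits lets the install proceed.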

> I think it has marginal security value, maybe net-negative if you balance it with the fact that cryptographers and cryptographic engineers have to waste time arguing against using PGP.

So again, are you really saying all the maintainers of most services running the internet should stop using the only IETF standard built for human-identity-bound signing with keys held by those humans?

The alternative everyone seems to be suggesting with a straight face is to log in with GitHub or Google and let them sign for you with "keyless signing"? That is the only alternative gaining adoption, and it is a ridiculous downgrade. I consider it mostly security theater.

The whole point of humans holding their own signing keys locally is to make it not matter whether your centralized online accounts are taken over, something that is usually easy to do because no one uses hardware 2FA or renews their personal email domains.

But if they did use hardware 2FA, hey look, they have a local signing key... so why not just sign the binaries with that hardware directly, instead of using it to log in and letting someone else sign for you? And if you are going to do that, you don't want to be impersonated, so why not publish those public keys and have other maintainers sign them? And now we have re-invented the web of trust.