Users already proven to be trustworthy in one project can automatically be assumed trustworthy in another project, and so on.

I get that the spirit of this project is to increase safety, but if the above social contract actually becomes prevalent, it seems like a net loss. It establishes an exploitable path for supply-chain attacks: an attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in the target project (possibly through multiple intermediary projects). If this sort of cross-project trust ever becomes automated, then any account that was ever trusted anywhere suddenly becomes an attractive target for account-takeover attacks. I think a pure distrust list would be a much safer place to start.
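To make the distinction concrete, here's a minimal sketch of the two gating strategies (hypothetical data model and function names, not Vouch's actual implementation):

```python
# Hypothetical sketch, not Vouch's actual data model: contrasting
# allowlist gating (vouch lists) with pure denylist gating (distrust lists).

def allowlist_gate(author: str, vouched: set[str]) -> bool:
    # Vouch-style: only previously vouched authors may participate.
    # Importing other projects' vouch lists widens this set transitively,
    # which is exactly the reputation-laundering surface described above.
    return author in vouched

def denylist_gate(author: str, distrusted: set[str]) -> bool:
    # Pure distrust list: everyone may participate except known bad actors.
    # There is no cross-project reputation to earn, so nothing to launder.
    return author not in distrusted

assert allowlist_gate("alice", vouched={"alice"})
assert not allowlist_gate("newcomer", vouched={"alice"})
assert denylist_gate("newcomer", distrusted={"slopinator9000"})
```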

Based on the description, I suspect the main goal isn't "trust" in the security sense; it's essentially a spam filter against low-quality AI "contributions" that would consume all available review resources without providing corresponding net-positive value.

Per the readme:

> Unfortunately, the landscape has changed particularly with the advent of AI tools that allow people to trivially create plausible-looking but extremely low-quality contributions with little to no true understanding. Contributors can no longer be trusted based on the minimal barrier to entry to simply submit a change... So, let's move to an explicit trust model where trusted individuals can vouch for others, and those vouched individuals can then contribute.

And per https://github.com/mitchellh/vouch/blob/main/CONTRIBUTING.md :

> If you aren't vouched, any pull requests you open will be automatically closed. This system exists because open source works on a system of trust, and AI has unfortunately made it so we can no longer trust-by-default because it makes it too trivial to generate plausible-looking but actually low-quality contributions.

===

Looking at the closed PRs of this very project immediately shows https://github.com/mitchellh/vouch/pull/28 - which, true to form, is an AI-generated PR that might have been tested and thought through by the submitter, but might not have been! The type of thing that can frustrate maintainers, for sure.

But how do you bootstrap a vouch-list without becoming hostile to new contributors? This seems like a quick way for a project to become insular/isolationist. The idea that projects could scrape/pull each other's vouch-lists just makes for a larger but equally insular community. I've seen well-intentioned prior art in other communities that's become downright toxic from this dynamic.

So, if the goal of this project is to find creative solutions to that problem, shouldn't it avoid dogfooding its own most extreme policy of rejecting PRs out of hand, lest it miss a contribution that suggests a real innovation?

I think this fear is overblown. What Vouch protects against is ultimately up to the downstream project, but generally it's simply gating access to participate at all. It doesn't give you the right to push code or anything; the normal review process still applies afterward. It's just gating the privilege to even request a code review.

It's just a layer to minimize noise.
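As a rough illustration (hypothetical `PullRequest` type and `triage` function, assumed from the CONTRIBUTING.md description rather than Vouch's real code), the gate sits in front of review, not in place of it:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    author: str
    title: str

def triage(pr: PullRequest, vouched: set[str]) -> str:
    """Decide what happens to a newly opened PR."""
    if pr.author not in vouched:
        # Unvouched PRs are closed automatically, per CONTRIBUTING.md.
        return "closed: author not vouched"
    # Being vouched only buys the right to request review; the normal
    # human review process still decides whether anything merges.
    return "open: awaiting normal code review"

print(triage(PullRequest("alice", "Fix typo"), vouched={"alice"}))
print(triage(PullRequest("slopinator9000", "Refactor"), vouched={"alice"}))
```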

And then they become distrusted, and BOOM: trust goes away from every project that subscribed to the same source.
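That cascade is easy to picture as a sketch (hypothetical sync model; the real subscription mechanics would depend on how a project pulls the shared list):

```python
# Hypothetical sketch of how a shared trust source propagates revocation:
# every subscriber re-derives its trust set from the source on sync.

shared_source = {"alice", "bob", "mallory"}

def sync(source: set[str]) -> set[str]:
    # Each subscribing project simply mirrors the shared source.
    return set(source)

project_a, project_b = sync(shared_source), sync(shared_source)

# The shared source distrusts mallory...
shared_source.discard("mallory")

# ...and on the next sync, every subscriber drops them at once.
project_a, project_b = sync(shared_source), sync(shared_source)
assert "mallory" not in project_a and "mallory" not in project_b
```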

Think of this like a spam filter, not an "I met this person live and we signed each other's PGP keys" level of trust.

It's not there to prevent long-con supply-chain attacks by state-level actors; it's there to keep Mr Slopinator 9000 from creating thousands of overly verbose, useless pull requests on projects.

That is indeed a weakness of Web of Trust.

Thing is, this system isn't supposed to be perfect. It is supposed to be better than the status quo, while remaining worth the hassle.

I doubt I'll get vouched anywhere (though IMO it depends on context), but I firmly believe humanity (including me) will benefit from this system. And if you aren't a bad actor with bad intentions, I believe you will, too.

The only side effect is that genuine contributors who aren't popular or in the know need to put in a little more effort. But again, that's part of the "worth the hassle" trade-off, and I'll take it.

> attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in target project (possibly through multiple intermediary projects).

Well, yeah, I guess? That's pretty much how the whole system already works: if you're an attacker willing to spend a long time doing helpful, beneficial work for projects, you're building a reputation that you can then abuse later, until people notice you've gone bad.

This feels a bit like https://xkcd.com/810/

Yeah, that's a different problem, unrelated to the one this is trying to solve.

It's just an example of what you can do, not a global feature that will be mandatory. If I trust someone on one of my projects, why wouldn't I want to trust them on others?