IMO: trust-based systems only work if they carry risk. Your own score should be linked to the people you "vouch for" or "denounce".

This is similar to real life: if you vouch for someone (in business, for example) and they turn out to be a scammer, your own reputation suffers, so vouching carries risk. Similarly, if you go around saying someone is unreliable, but people find out they actually aren't, your reputation also suffers. If vouching or denouncing becomes free, it becomes too easy to weaponize.

Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

> Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

Good reason to be careful. Maybe there's a bit of an upside too: if you vouch for someone who does good work, you get a little boost as well. That's how personal relationships work anyway.
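The "linked score" idea above can be sketched in a few lines. This is a hypothetical model, not anything from TFA: the names, the `VOUCH_STAKE` weight, and the score scale are all made up for illustration. The point is just that a fraction of each outcome propagates back to the vouchers, so vouching carries both risk and upside.

```python
# Hypothetical sketch of "linked" reputation: a voucher stakes part of
# their own score on the person they vouch for. Weights are arbitrary.

VOUCH_STAKE = 0.1  # fraction of an outcome reflected back on each voucher

class Member:
    def __init__(self, name, score=1.0):
        self.name = name
        self.score = score
        self.vouched_by = []  # members who vouched for this member

def vouch(voucher, subject):
    subject.vouched_by.append(voucher)

def record_outcome(subject, delta):
    """Good work (delta > 0) or a scam (delta < 0) moves the subject's
    score, and a fraction of that propagates back to their vouchers."""
    subject.score += delta
    for voucher in subject.vouched_by:
        voucher.score += VOUCH_STAKE * delta

alice, bob = Member("alice"), Member("bob")
vouch(alice, bob)
record_outcome(bob, -1.0)      # bob turns out to be a scammer
print(round(alice.score, 2))   # alice's score drops too
```

Whether the stake should be flat, proportional, or time-decayed is exactly the hard formalization question raised elsewhere in the thread.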

----------

I'm pretty skeptical of all things cryptocurrency, but I've wondered if something like this would be an actually good use case of blockchain tech…

> I'm pretty skeptical of all things cryptocurrency, but I've wondered if something like this would be an actually good use case of blockchain tech…

So the really funny thing here is the first bitcoin exchange had a Web of Trust system, and while it had its flaws IT WORKED PRETTY WELL. It used GPG and later on bitcoin signatures. Nobody talks about it unless they were there, but the system is still online. Keep in mind, this was used before centralized exchanges and regulation. It did not use a blockchain to store ratings.

As a new trader, you basically could not do trades in their OTC channel without going through traders that specialized in new people coming in. Sock accounts could rate each other, but when you checked to see if one of those scammers were trustworthy, they would have no level-2 trust since none of the regular traders had positive ratings of them.
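The "level-2 trust" check described above is easy to sketch. This is an illustrative reconstruction, not bitcoin-otc's actual algorithm: only ratings from people *you* rate positively count toward a target's trust, so a ring of sock accounts rating each other scores zero from any real trader's perspective.

```python
# Rough sketch of "level-2" trust: a target only counts as trusted if
# someone the viewer rates positively also rates them positively.
# The ratings data is made up for illustration.

ratings = {
    # rater -> {subject: rating}
    "you":      {"veteran1": 5, "veteran2": 3},
    "veteran1": {"newbie": 2},
    "sock1":    {"sock2": 10},
    "sock2":    {"sock1": 10},
}

def level2_trust(viewer, target):
    """Sum of ratings for `target` from people the viewer rates positively."""
    total = 0
    for intermediary, rating in ratings.get(viewer, {}).items():
        if rating > 0:
            total += ratings.get(intermediary, {}).get(target, 0)
    return total

print(level2_trust("you", "newbie"))  # 2: reachable via veteran1
print(level2_trust("you", "sock2"))   # 0: socks only rate each other
```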

Here's a link to the system: https://bitcoin-otc.com/trust.php (on IRC, you would use a bot called gribble to authenticate)

Biggest issue was always the fiat transfers.

If we want to make it extremely complex, wasteful, and unusable for 99% of people, then sure, put it on the blockchain. Then we can write tooling and agents in Rust with sandboxes created via Nix to have LLMs maintain the web of trust by writing Haskell and OCaml.

Well done, you managed to tie Rust, Nix, Haskell and OCaml to "extremely complex, wasteful, and unusable"

Zig can fix this, I'm sure.

I don't think that trust is easily transferable between projects, and tracking "karma" or "reputation" as a simple number in this file would be technically easy. But how much should the "karma" value change from different actions? It's really hard to formalize efficiently. The web of trust, with all its intricacies, fits well into participants' heads in small communities. This tool is definitely for reasonably small "core" communities handling a larger stream of drive-by / infrequent contributors.

> I don't think that trust is easily transferable between projects

Not easily, but I could imagine a project deciding to trust (to some degree) people vouched for by another project whose judgement they trust. Or, conversely, denouncing those endorsed by a project whose judgement they don't trust.

In general, it seems like a web of trust could cross projects in various ways.

Ethos is already building something similar, but starting with a focus on reputation within the crypto ecosystem (which I think most can agree is an understandable place to begin).

https://www.ethos.network/

I'm unconvinced. To my possibly-undercaffeinated mind, the string of three posts reads like this:

- a problem already solved in TFA (vouching for someone who is eventually denounced doesn't prevent you from being denounced too; you can totally do it)

- a per-repo (or worse, global) blockchain to solve incrementing and decrementing integers (vouch vs. denounce)

- a lack of understanding that automated global scoring systems are an abuse vector and something people will avoid (cf. Black Mirror and social credit scores in China)

Sounds like a Black Mirror episode.

Isn't that literally the plot of one of the episodes? Where they get an x-out-of-5 rating that is always visible.

Look at ERC-8004

> Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

The same reason you vouch for someone when your company is hiring: because you will benefit from their help.

I think your suggestion is a good one.

> Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

Maybe your own vouch score goes up when someone you vouched for contributes to a project?

Think Epstein, but in code. Everyone would vouch for him because he's hyper-connected, so he'd get a free pass all the way. Until it all blows up in our faces and everyone who vouched for him gets flagged. The main issue is that it can take 10-20 years to blow up.

Then you have introverts that can be good but have no connections and won’t be able to get in.

So you're kind of selecting for people who are both well-connected and good.

Excellent point. Currently, HN accounts get much higher scores for contributing content than for making valuable comments. Those should be two separate scores. Instead, accounts with really good advice have lower scores than accounts that just automate re-posting content from elsewhere to HN.

Fair (and you’re basically describing the xz hack; vouching is done for online identities and not the people behind them).

Even with that risk I think a reputation based WoT is preferable to most alternatives. Put another way: in the current Wild West, there’s no way to identify, or track, or impose opportunity costs on transacting with (committing or using commits by) “Epstein but in code”.

But the blowback is still there. The Epstein saga has fragmented, and will continue to fragment and discipline, the elite. Most people probably do genuinely regret associating with him. Noam Chomsky's credibility and legacy are permanently marred, for example.

> trust-based systems only work if they carry risk. Your own score should be linked to the people you "vouch for" or "denounce"

This is a graph search. If the person you're evaluating vouches for people whom those you vouch for denounce, then even if they aren't denounced per se, you have gained information about how trustworthy you would find that person. (Same in reverse: if they vouch for people whom your vouchers vouch for, that indirectly suggests trust even if they aren't directly vouched for.)
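That indirect signal can be sketched with a tiny graph. This is a toy model with made-up names, not a real scoring algorithm: for each person the viewer vouches for, we check whether that person vouches for or denounces the candidate's own vouchees, and tally agreements against conflicts.

```python
# Illustrative sketch of indirect trust signal: compare the candidate's
# vouches against the opinions of people the viewer already trusts.
# All names and edges are hypothetical.

vouches = {
    "me":        {"friend"},
    "friend":    {"good_dev"},
    "candidate": {"good_dev", "shady"},
}
denounces = {
    "friend": {"shady"},
}

def indirect_signal(viewer, candidate):
    """+1 for each of the candidate's vouchees that someone the viewer
    trusts also vouches for; -1 for each one they denounce."""
    signal = 0
    for trusted in vouches.get(viewer, set()):
        for subject in vouches.get(candidate, set()):
            if subject in vouches.get(trusted, set()):
                signal += 1  # agreement with someone I trust
            if subject in denounces.get(trusted, set()):
                signal -= 1  # conflict with someone I trust
    return signal

# "candidate" vouches for good_dev (+1, friend agrees) and shady
# (-1, friend denounces), so the net signal here is 0: mixed evidence.
print(indirect_signal("me", "candidate"))
```

A real system would presumably weight edges by path length and the voucher's own score rather than counting them equally.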

I've been thinking in a similar space lately, about what a "parallel web" could look like.

One of my (admittedly half baked) ideas was a vouching similar with real world or physical incentives. Basically signing up requires someone vouching, similar to this one where there is actual physical interaction between the two. But I want to take it even further -- when you signup your real life details are "escrowed" in the system (somehow), and when you do something bad enough for a permaban+, you will get doxxed.