> I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust.
I'm happy to verify my identity as an honest-to-god sack of meat if it's done in a privacy-protecting way.
That probably is where things are gonna go, in the long run. Too hard to stop bots otherwise.
In order to make this viable, wouldn't you have to verify identity repeatedly? What's to stop me from providing a valid identity and then handing my account over to an agent after I'm verified?
That's why a web of trust was suggested. You keep track of who vouched for whom and down-weight those who vouch for users that prove to be bots. In theory, at least. In practice it's certainly more complicated than that.
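In toy form, the bookkeeping might look something like this (a minimal sketch; the 0.5 penalty factor is an arbitrary illustration, not a proposal):

```python
from collections import defaultdict

# trust[v] is the network's current trust in voucher v, in (0, 1].
trust = defaultdict(lambda: 1.0)
# vouchers[u] is the set of members who vouched for user u.
vouchers = defaultdict(set)

def vouch(voucher: str, new_user: str) -> None:
    vouchers[new_user].add(voucher)

def flag_as_bot(bot: str, penalty: float = 0.5) -> None:
    """When a vouched-for user proves to be a bot, down-weight
    everyone who vouched for them."""
    for v in vouchers[bot]:
        trust[v] *= penalty
```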
If the web of trust only extends to the people who I actually know to be real, then that works -- but it's a very small web.
And by small, I mean: this whole trusted group could fit into one quiet Discord channel. That doesn't seem big enough to be useful.
However, if it extends beyond that, then things get dicier: suppose Bill trusts me, as well as those that I myself trust. Bill does this in order to make his web of trust big enough to be useful.
Now, suppose I start trusting bots -- maybe accidentally, maybe maliciously. However it happens, Bill now has bots in his web of trust as well.
And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.
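To make that concrete, here is a toy sketch of what "Bill trusts those I trust" does to his web (the two-hop depth and the names carol/bot_1/bot_2 are just for illustration):

```python
def effective_web(member, direct, depth=2):
    """Everyone `member` ends up trusting, if trust extends
    transitively through the people they trust directly."""
    web = set(direct[member])
    if depth > 1:
        for peer in direct[member]:
            web |= effective_web(peer, direct, depth - 1)
    return web

direct = {
    "bill": {"me"},
    "me": {"carol", "bot_1", "bot_2"},  # bots I trusted, unknowingly
    "carol": set(), "bot_1": set(), "bot_2": set(),
}
print(effective_web("bill", direct))
# contains me, carol, bot_1, bot_2 -- my bots are now in Bill's web
```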
---
It all seems kind of self-defeating, to me. The web is either too small to be useful, or it includes bots.
Critically, it doesn't have to be binary trusted/untrusted, and it doesn't have to be statically determined. If Bill vouched for you yesterday and today you are trusting a bunch of discovered bots, that would down-weight the network's trust in Bill a lot more than if he had vouched for you months ago.
The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.
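For example, the penalty applied to a voucher could decay with the age of the vouch (a minimal sketch; the half-life and base penalty are arbitrary values, not a concrete proposal):

```python
import math

def vouch_penalty(days_since_vouch, base_penalty=0.5, half_life_days=30.0):
    """Down-weight a voucher more when their vouch was recent.
    A vouch from yesterday carries nearly the full penalty; one
    from months ago has largely decayed."""
    recency = math.exp(-days_since_vouch * math.log(2) / half_life_days)
    # Multiplier in (base_penalty, 1]: smaller = harsher.
    return 1.0 - (1.0 - base_penalty) * recency

vouch_penalty(1)    # ~0.51 -- vouched yesterday, trust roughly halved
vouch_penalty(180)  # ~0.99 -- vouched months ago, barely penalized
```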
The web of trust doesn't know that they're bots, though. It knows only that I've introduced new members. They didn't show up with tattoos across their digital foreheads that say "BOT" -- they instead came in acting just as people do.
If the bots behave themselves, then they have as much capacity to rise in rank/trust as any new, well-behaved, bona fide human members do.
>> That's why a web of trust was suggested. You keep track of who vouched for whom and down-weight those who vouch for users that prove to be bots.
Except that eventually it will also down-weight users who supported <XYZ political stance>
You could, but things would still be harder for botters.
I guess it would have to be something like a service which confirms whether a person already has an account on the site but doesn’t have to track which particular account it is.
I’m not sure if that would work for account deletions though.
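One naive shape for that (purely illustrative; `SERVICE_KEY`, `token`, and `register` are made-up names, and a real design would want blind signatures or similar so the service never handles `person_id` in the clear):

```python
import hashlib
import hmac

# Secret held only by the verification service; here just a stand-in.
SERVICE_KEY = b"service-secret"
registered_tokens = set()

def token(person_id: str, site_id: str) -> bytes:
    """Opaque per-(person, site) token: the service can detect a
    duplicate registration, but the site never learns which of its
    accounts a given token corresponds to."""
    msg = f"{person_id}|{site_id}".encode()
    return hmac.new(SERVICE_KEY, msg, hashlib.sha256).digest()

def register(person_id: str, site_id: str) -> bool:
    t = token(person_id, site_id)
    if t in registered_tokens:
        return False  # this person already holds an account here
    registered_tokens.add(t)
    return True
```

Deletions are awkward in exactly the way suggested: since nobody can map an account back to its token, the service could only release a token if the person re-verified themselves at deletion time.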
That is effectively impossible, though. There are data centers full of stripped-down phones, so "it's actually a phone" doesn't do it.
There's some work on using phone accelerometer data as a "proof of human," e.g. "move your phone in a figure eight," which I guess machines can't quite do in a human enough way yet.
What's stopping bots from verifying an identity? This won't work, especially with frequent data breaches.