From what I have seen, the "solution" to moderation troubles in decentralized systems has almost always been... more centralization.

The only major decentralized forms of social media I'm aware of that have confronted philosophical questions of how to handle moderation are Mastodon, Bluesky, and arguably Lemmy.

I don't know of any sense in which Mastodon has increased centralization. Its blocking tools have been distributed essentially since the beginning, not something that has iterated toward centralization over time in response to an unfolding debate. It does have a complicated history, though, and it's possible new things have happened that I'm not aware of.

Bluesky, though, to your point, is a good example of nominal decentralization not making a service accountable to its users. Or, to say the same thing a different way: the lack of accountability has served to reveal how centralized it truly is.

It does seem to be simple enough that people don't get confused about using it, but it doesn't seem to walk the actual walk of decentralization.

The big example that comes to my mind is Matrix, where most homeservers use Mjolnir to apply centralized public blocklists of other servers/people they don't like.

So if for example #archlinux disagrees with your opinion and they decide to ban you for it, you are now banned from many other unrelated channels.

I have also seen subreddits that auto-ban users that have ever posted in specific other (unrelated) subreddits.

Mjolnir is designed to apply decentralised public blocklists - i.e. you pick which banlists to apply; there are a bunch published by different people (matrix.org, the matrix 'community moderation effort', etc). Admittedly moderators do share lists (so that if #archlinux bans you, others might pick up the ban), but there's no intrinsic centralisation.

https://github.com/matrix-org/matrix-doc/blob/msc2313/propos... is how it works fwiw.
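For reference, a policy rule under that MSC is just a state event in a Matrix room. Roughly, in Python (the event shape follows MSC2313, but `subscribed_bans` and the sample IDs are my own illustration, not Mjolnir's actual code):

```python
import fnmatch

# An MSC2313 moderation policy list is a Matrix room whose state events
# describe entities to act on. A user rule looks roughly like this
# (the state key is arbitrary):
policy_event = {
    "type": "m.policy.rule.user",
    "state_key": "rule_1",
    "content": {
        "entity": "@spammer:*",        # glob over user IDs
        "recommendation": "m.ban",
        "reason": "spam",
    },
}

def subscribed_bans(policy_events, user_id):
    """Reasons to ban user_id, under only the lists we chose to subscribe to."""
    return [
        ev["content"]["reason"]
        for ev in policy_events
        if ev["type"] == "m.policy.rule.user"
        and ev["content"].get("recommendation") == "m.ban"
        and fnmatch.fnmatchcase(user_id, ev["content"]["entity"])
    ]

print(subscribed_bans([policy_event], "@spammer:example.org"))  # → ['spam']
```

The key point is that which policy rooms feed into `policy_events` is each server admin's choice; nothing in the protocol forces everyone onto the same list.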

Bluesky also has this, but it works poorly: people use blocklists as a form of harassment, or take over existing lists and add their enemies to them.

And Bluesky has the ultimate power of choosing which blocklists to show you.

Eh, not really. It really is decentralized; the lists are public and you can find them on Google. They could stop respecting a certain list, but I think you can host your own AppView and get it back?

No one is successfully hosting their own AppView at this point. Blacksky had one but had to roll it back. Northsky has it as their priority, but they don't have one yet.

I mean when lists are opt-in and users have to specifically choose what lists to subscribe to then well… kinda tough shit if you end up on one that many people are happy with. That's the point of putting moderation in the hands of users, they're allowed to block you for any number of weird reasons.

It sucks when server operators group together, because it effectively becomes a centralized moderation team that makes decisions for users with tenuous implicit consent, but user moderation lists aren't that.

Not really, because there's only one AppView and you can't unsubscribe from the lists they dislike.

Whereas with Matrix, you can run your own server, with your own rules, and federate with the rest of the network.

ML could potentially provide a decentralized mechanism for filtering unwanted content, or even hybrid approaches. IMO the worst of content filtering (gore or other psychologically disturbing content) will soon be an automated job.

It already is and has been for a long time. The kinds of content you have to moderate are not things anyone wants to look at. The worst ones are far worse than you can imagine and the average ones are nudes of people you don't want to look at all day.

This is actually today's controversy on Bluesky because the #1 attribute of its power users is they're terrified of "AI" and the idea that "companies will steal their posts to generate AI slop", which means they think the ML moderation is stealing their posts.

Oh, but the ML can't be decentralized because the training datasets are illegal.

Not necessarily.

Centralization eventually ends up with a single entity in charge of everything, which eventually does (or doesn't do) something that causes its value to collapse.

The real solution here is federalization: a bunch of independent self-governing entities that co-operate with other entities to assist each other in moderation.

A good non-social network example here would be adblockers.

- Each adblocker uses at least one ad/tracker filter list, with most adblockers allowing multiple lists and shipping a sensible default set for their users.

- Each list has its own moderators that add/update/remove entries on their list based on their own values.

- Adblockers (and their users) can collaborate on requesting changes to lists, resulting in faster reactions to advertising changes on the web, and in turn faster updates passed down to users of those adblockers who participate.

- If an adblocker can't do their job anymore (e.g. their owners/workers can't do their job anymore, the owner sells out, etc...) users can switch to (or create) a new adblocker.

- If a list fails, adblockers can switch to other lists (or create a new one).

No adblocker and no list holds all the power. Adblocking as a whole is strengthened by always having viable alternatives that can be switched to, and methods to quickly create new alternatives if the need arises.
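The list-merging mechanics above are simple enough to sketch (the list names and domains here are made up):

```python
def effective_blocklist(subscribed, available_lists):
    """Union of every list the user subscribed to; a dead list just drops out."""
    merged = set()
    for name in subscribed:
        merged |= available_lists.get(name, set())
    return merged

# Each list is maintained independently; the user picks which ones to trust.
lists = {
    "easylist-ish": {"ads.example.com", "tracker.example.net"},
    "regional":     {"ads.example.org"},
}
print(effective_blocklist(["easylist-ish", "regional"], lists))
```

Because the merge is just a union over user-chosen sources, any one list failing or going rogue degrades the result instead of breaking it, which is exactly the resilience property being described.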

That's the power of federation: the strengths of centralization without the weaknesses.

The social media version of a federated Twitter is Mastodon: a whole bunch of groups running their own Mastodon servers that can interact with each other as if they were a single centralized Mastodon website, with similarly aligned servers sharing co-operatively maintained bad-actor lists.

The old internet was the most decentralized, but since the platforms weren't scaling up to ridiculous heights, moderation wasn't that big of a deal. It was also "gatekept" in a self-selecting way; now, everyone is online. Conspiracy beliefs have drastically shot up in adoption through social media exposure.

People always had irrational populist and conspiratorial beliefs, but that was mediated by popular media generally not platforming kooks. Now you have the top 10 podcasts allowing people to mainline validation for conspiracies.

I don't see how centralization helps. Allowing (or demanding) that a media provider regulate more could lead to less platforming for conspiracy theorists and populists.