> people are letting bots act on their behalf (moltbook is a great example of this) and what's to stop people doing that in the future.

Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
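
For the curious, the mechanism here is essentially the "pairwise pseudonymous identifier" from OpenID Connect. A minimal sketch of the issuer side, assuming an issuer that holds a secret key and has verified the real-world identity (all names and key handling below are illustrative):

```python
import hashlib
import hmac

# Hypothetical issuer-side sketch. The issuer knows the verified
# real-world identity; each service only ever sees a stable pseudonym
# derived for it, so bans stick but identities stay private.
ISSUER_SECRET = b"issuer-private-key-material"  # held only by the issuer

def pairwise_pseudonym(real_identity: str, service_id: str) -> str:
    """Derive a persistent, per-service pseudonymous identifier.

    The same person always maps to the same pseudonym at a given
    service (so a ban is permanent), but pseudonyms at different
    services can't be linked without the issuer's secret.
    """
    msg = f"{real_identity}|{service_id}".encode()
    return hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()

# The service bans the pseudonym, never the real identity.
# Re-registering yields the same pseudonym, so the ban holds:
p1 = pairwise_pseudonym("alice-national-id", "forum.example")
p2 = pairwise_pseudonym("alice-national-id", "forum.example")
assert p1 == p2
assert p1 != pairwise_pseudonym("alice-national-id", "other-site.example")
```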

You kinda skipped the bit I wrote alongside this about strong authentication. There are numerous ways to do this. For example, in Finland you have to physically identify yourself to open a bank account and you can then use that to authenticate. It's used for all public sector services and a few others with strict accreditation.

The issue is that it solves nothing if you can't distinguish between text written by an AI and text that isn't, regardless of strong authentication.

There is the other side of this too: Real people - fake posts.

So, you have other folks on here already saying that the code their bots write is better than their own, right?

How long until someone who is karma-focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even lower stakes. I imagine that non-native speakers will take their posts to an AI to help clean them up, at the very least. At the worst, I can imagine a person having a bot interact fully under their name.

So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.

This is exactly right. The problem is the friction that this kind of system adds.

Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...

The ability to make a new account is an important defense against abusive bans. You don't want it to be possible for Google to unperson you.

I've talked about this on here before, but we think the solution is an auth layer built on top of credit score, through an intermediary like creditkarma. The score itself doesn't really matter, but it does solve big problems.

Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews/comments from users with credit scores over 650; they have less incentive to be astroturfing.

But yes, I think your conclusion is correct. This is the only way.

How is that creditkarma accumulated? By other "users"? Does the intermediary guarantee that this account is a valid person, now and always, and that the account hasn't been sold or stolen? I mean, we will always need some middlemen, I guess?

IMO this is inevitable. HN is freaking out about the end of the anonymous internet, but it's already over and we're just figuring it out. Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.

> Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.

How do you figure? If these bots are driven by commercial interests that seems an unlikely outcome.

I'd rather have a system where there's a small investment cost to making an account, but you could always make another.

Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person or close enough to it."

Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
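
One way to picture that last property: the forum exposes only a "spend" operation, never a "check" operation, so the only way to learn whether a token is valid is to consume it. A minimal sketch, with all names illustrative:

```python
import hashlib
import secrets

# Hypothetical sketch: the vending machine records hashes of the tokens
# it sells; the forum removes a hash when a token is redeemed. There is
# deliberately no "is this valid?" query, only "spend".
unspent = set()

def mint_token() -> str:
    """Vending machine side: issue a token and record its hash."""
    token = secrets.token_hex(16)
    unspent.add(hashlib.sha256(token.encode()).hexdigest())
    return token

def spend_token(token: str) -> bool:
    """Forum side: redeeming is the only check, and it is destructive."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    if digest in unspent:
        unspent.remove(digest)  # the check consumes the token
        return True
    return False

t = mint_token()
assert spend_token(t) is True    # first redemption succeeds...
assert spend_token(t) is False   # ...probing it again gets you nothing
```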

What does it matter? If there is incentive enough people will just pay and let their bot act on their behalf.

Something Awful made you pay $10 for an account, directly to the forum. If you got banned, you could pay another $10 to try again. Somehow this didn't lead to the bad incentives you'd think it would.

The ban reason and the moderator's name were public on Something Awful, which allowed the community to respond (actively or passively), and more senior moderators/admins to take public action against rogue moderators. The transparent audit trail countered the incentive to ban somewhat, but a lot of people also treated getting banned as a game.

Did they ban for this rule often?

"Am I making a post which is either funny, informative, or interesting on any level?

I hate how Reddit mods ban any post they don't like as being 'low effort / shit / spam' when that justification is completely vague.

Lemmy is even worse on the moderation front, even with public logs: https://a.imagem.app/G3R9xb.png

Lemmy isn't simply Lemmy since it's federated. A screenshot like this is somewhat meaningless without specifying on which instance this happened. There are instances with very lax or even no moderation at all.

For the majority of large, well-federated instances, I don't think it's meaningless, because deletions also propagate to other instances.

If a mod on one server doesn't like something I say, and they delete my comment, all the other (well-behaved) federated instances will also delete my comment.

Of course this also creates problems in the other direction, like servers that ignore deletion requests.

Combine that with the large number of blocked instances across the board, and I feel like you get into this "which direction would you like to piss into the wind" situation where you have no idea how many people/instances will actually see your message, if any at all.

I’d love something like this implemented for email.

Sending an unsolicited email to a random person X would require paying a small toll (something like 50p).

Subsequent emails can then be sent for free; however, person X can "revoke" your access at any time, necessitating a further toll payment.

You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.

This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
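
A minimal sketch of the toll/allowlist state machine described above, assuming a flat per-sender toll (all names are illustrative, and 0.50 stands in for the 50p):

```python
# Hypothetical sketch of the toll scheme: unknown senders must pay once,
# paid senders are allowlisted, and revocation forces a fresh toll.
TOLL = 0.50

class Inbox:
    def __init__(self):
        self.allowed = set()  # senders who may mail for free

    def receive(self, sender: str, toll_paid: float = 0.0) -> bool:
        """Accept mail from allowed senders, or from anyone paying the toll."""
        if sender in self.allowed:
            return True
        if toll_paid >= TOLL:
            self.allowed.add(sender)  # first paid email allowlists the sender
            return True
        return False                  # unsolicited and unpaid: rejected

    def revoke(self, sender: str) -> None:
        """Recipient revokes access; the sender must pay the toll again."""
        self.allowed.discard(sender)

inbox = Inbox()
# Pre-authorising friends/family/services is just seeding `allowed`:
inbox.allowed.add("mum@example.com")

assert inbox.receive("stranger@example.com") is False        # no toll, no entry
assert inbox.receive("stranger@example.com", toll_paid=0.5)  # toll paid once
assert inbox.receive("stranger@example.com") is True         # subsequent free
inbox.revoke("stranger@example.com")
assert inbox.receive("stranger@example.com") is False        # must pay again
```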

>transactional emails from various services that you’ve signed up for

These are among the main culprits of unwanted email... and a toll system would make them all the more valuable for even worse actors to take advantage of.

When Digg restarted, you had to pay $5 to create an account.

Do you think there is a price point that locks out spammers without locking out poor people?

Probably not; the problem is that spammers/scammers are looking for whales, and if you're talking about draining the retirement accounts of an American who's been saving all their life, that's quite a big payout, in the six or seven figures.

In the case of the 419 scams, I used to ask, “Who would expect $20M to fall out of the sky?” The obvious answer is “someone who already had $20M fall out of the sky.”
