The bot problem cannot be solved. Even if you strongly authenticate, people are letting bots act on their behalf (moltbook is a great example of this), and what's to stop people from doing that in the future? A bot can build your identity and reputation autonomously, with all the benefits that come with that.

This happens now on OnlyFans too. Content creators hire agencies which, in the best case, outsource chatting with "customers" to armies of cheap labour in Asia, and in the worst case use bots.

The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy disallowing AI posts and posters, but do you honestly think that's going to work? I would bet that within the next year a top HN poster is outed as using AI to post on their behalf.

[1] https://en.wikipedia.org/wiki/Dead_Internet_theory

The bot problem can be solved.

Anubis is one such answer [0]. Cryptocurrency and micro transactions are another.

In the last few decades, spam was a problem because the marginal transaction costs of information exchange were orders of magnitude lower than they had been. Note that physical mail spam was, and still is, an issue. Focusing on perceptual or fuzzy computation as the limiting factor, through captchas and other 'human tests', allowed for most spam to be effectively mitigated.

Now that intelligence is becoming orders of magnitude cheaper, perceptual computation challenges no longer work, but we can still do computation challenges in the form of proof of work or proxies thereof. Spam will never wholly go away but we can at least cause more friction by charging bot networks to execute in the form of energy or money.
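The proof-of-work idea above is essentially hashcash: the server hands out a random challenge, and the client must burn CPU to find a nonce whose hash clears a difficulty threshold. A minimal sketch in Python (the function names and the 8-byte nonce encoding are my own illustration, not Anubis's actual protocol):

```python
import hashlib
import os

def solve_challenge(seed: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(seed || nonce) has its top
    `difficulty_bits` bits zero -- the hashcash idea behind
    proof-of-work gates like Anubis."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(seed: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash: cheap for the server, while
    solving costs the client real CPU time (energy, i.e. money)."""
    digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

seed = os.urandom(16)
nonce = solve_challenge(seed, 16)  # roughly 2^16 hashes on average
assert verify(seed, nonce, 16)
```

The asymmetry is the point: one hash to check, tens of thousands to solve, so the cost lands on whoever is making the requests at scale.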

[0] https://github.com/TecharoHQ/anubis

I don't see how Anubis solves anything. If a human lets the bot control a completely vanilla computer (which there is now a lot of tooling for), then how is it going to stop that?

Indeed - the future is RL meet-ups and small, intimate online communities.

Perhaps not the worst thing in the world?

This is the optimistic take I’ve held.

Bots get so good that they become indistinguishable from humans. If that's true, then it shouldn't matter whether your community is all bots. But it does matter, because authenticity matters to humans. They will seek authenticity where they can successfully sense it, which will be in person.

Human simulacra will one day cause a repeat of this issue. Then we'll have a whole Blade Runner 2049 debate about what exactly authenticity is.

Counterpoint: https://reddit.com/r/MyBoyfriendIsAI/

People will prefer the bots that give them head pats and tell them they're so smart and that they love them.

I don't necessarily think that is an obstacle to people socializing more offline/being socially productive online.

Especially considering that the bigger obstacles seem to be the ones we already have:

In Asia (especially Japan), it's host(ess) clubs.

Globally, for friendship, it's influencers exploiting loneliness.

Those are the things I think have to go for people to embrace offline socialization or use their online time better.

> Perhaps not the worst thing in the world?

Definitely not. “Terminally online” is as deleterious as it sounds.

Yeah, you're completely right. Maybe this will be the impetus a lot of people need to detach from online.

"content creators" https://fgiesen.wordpress.com/2025/07/06/content-creator/

It's the same freelance advertisers who optimistically refer to themselves as "influencers".

The word "content" is gross.

"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...

The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.

"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.

Since it's OnlyFans, I'd think something like "porn stars" or "online girlfriends"

If it were YouTube, "YouTuber" is a start, but you could also be a "YouTube science communicator" or something

Creator is a fine word to use in place of YouTuber. And vice versa.

But what do you call their output?

What do you call an illustrator's output? A photographer? What about when all of that shows up on your feed collectively?

Content is a gross word.

Creations?

> people are letting bots act on their behalf (moltbook is a great example of this) and what's to stop people doing that in the future.

Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
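The core trick here, sometimes called pairwise pseudonymous identifiers, can be sketched very simply: the identity provider (which has verified the real-world identity) derives a different but stable pseudonym per service. Real verifiable-credential systems use zero-knowledge proofs rather than a plain HMAC; this toy Python version, with invented names, only shows the unlinkable-but-persistent property:

```python
import hashlib
import hmac

# Assumption: PROVIDER_SECRET is held only by the identity issuer.
PROVIDER_SECRET = b"provider-only-key"

def pseudonym(real_id: str, service_id: str) -> str:
    """Derive a per-service pseudonym from a verified real identity.
    A service can ban the pseudonym forever; it never learns the real
    identity, and two services cannot link their pseudonyms."""
    mac = hmac.new(PROVIDER_SECRET,
                   f"{real_id}|{service_id}".encode(),
                   hashlib.sha256)
    return mac.hexdigest()

alice_on_forum = pseudonym("alice", "forum.example")
alice_on_shop = pseudonym("alice", "shop.example")
assert alice_on_forum != alice_on_shop  # unlinkable across services
assert alice_on_forum == pseudonym("alice", "forum.example")  # stable: bans stick
```

Because the pseudonym is deterministic per (identity, service) pair, re-registering after a ban just reproduces the banned identifier.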

You kinda skipped the bit I wrote alongside this about strong authentication. There are numerous ways to do this. For example, in Finland you have to physically identify yourself to open a bank account, and you can then use that to authenticate. It's used for all public-sector services and a few others with strict accreditation.

The issue is that strong authentication solves nothing if you can't distinguish text written by AI from text that isn't.

There is the other side of this too: Real people - fake posts.

So, you have other folks on here already saying that the code their bots write is better than their own, right?

How long until someone who is karma-focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even less at stake. I imagine that non-native speakers will take their posts to an AI to help clean them up, at the very least. At worst, I can imagine a person having a bot interact fully under their name.

So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.

This is exactly right. The problem is the friction that this kind of system adds.

Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...

The ability to make a new account is an important defense against abusive bans. You don't want it to be possible for Google to unperson you.

I've talked about this on here before, but we think the solution is an auth layer built on top of credit scores through an intermediary like creditkarma. The score itself doesn't really matter, but it does solve big problems.

Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews/comments from users with credit scores over 650; they have less incentive to be astroturfing.

But yes, I think your conclusion is correct. This is the only way.

How is that credit karma accumulated? By other "users"? Does the intermediary guarantee that this account is a valid person, now and always, and that the account hasn't been sold or stolen? I mean, we will always need some middlemen, I guess?

IMO this is inevitable. HN is freaking out about the end of the anonymous internet, but it's already over and we're just figuring it out. Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.

> Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.

How do you figure? If these bots are driven by commercial interests that seems an unlikely outcome.

I'd rather have a system where there's a small investment cost to making an account, but you could always make another.

Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."

Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
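That destructive-check property is easy to model: validity is only revealed by redemption, and redemption kills the token. A toy Python sketch of the vending-machine scheme (class and method names are my own invention):

```python
import secrets

class TokenVendor:
    """Toy model of the vending-machine idea: tokens are random and
    unlinkable to the buyer, and checking one *consumes* it, so a
    token's validity cannot be probed non-destructively."""

    def __init__(self):
        self._unspent = set()

    def sell(self) -> str:
        # Issued at the machine; nothing about the buyer is recorded.
        token = secrets.token_urlsafe(16)
        self._unspent.add(token)
        return token

    def redeem(self, token: str) -> bool:
        # Destructive: whether valid or not, the token is dead
        # after this call.
        if token in self._unspent:
            self._unspent.remove(token)
            return True
        return False

vendor = TokenVendor()
t = vendor.sell()
assert vendor.redeem(t) is True   # first redemption succeeds
assert vendor.redeem(t) is False  # second attempt fails: token spent
```

A forum that receives a token redeems it once at signup; anyone who wants to "test" a stolen token destroys its value in the process.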

What does it matter? If there is enough incentive, people will just pay and let their bot act on their behalf.

Something Awful made you pay $10 for an account. Directly to the forum. If you got banned you could pay another $10 to try again. Somehow this didn't lead to that bad incentives even though you'd think it would.

The ban reason and the moderator's name were public on Something Awful, which allowed the community to respond (actively or passively), and more senior moderators/admins to take public action against rogue moderators. The transparent audit trail somewhat countered the incentive to ban, but a lot of people also treated getting banned as a game.

Did they ban for this rule often?

"Am I making a post which is either funny, informative, or interesting on any level?

I hate how Reddit mods ban any post they don't like as 'low effort / shit / spam' when the rule is completely vague.

Lemmy is even worse on the moderation front, even with public logs: https://a.imagem.app/G3R9xb.png

Lemmy isn't simply Lemmy since it's federated. A screenshot like this is somewhat meaningless without specifying on which instance this happened. There are instances with very lax or even no moderation at all.

For the majority of large, well-federated instances, I don't think it's meaningless, because deletions also propagate to other instances.

If a mod on one server doesn't like something I say, and they delete my comment, all the other (well-behaved) federated instances will also delete my comment.

Of course this also creates problems in the other direction, like servers that ignore deletion requests.

That, combined with the large number of blocked instances across the board, puts you in this "which direction would you like to piss into the wind" situation where you have no idea how many people/instances will actually see your message, if any.

I’d love something like this implemented for email.

Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).

Subsequent emails can then be sent for free - however person X can “revoke” your access any time necessitating a further toll payment.

You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.

This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
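The proposed rules (toll on first contact, free thereafter, revocable) fit in a few lines. A hedged Python sketch, with invented names and an assumed flat 50p toll, just to show the economics:

```python
class TollInbox:
    """Toy model of the email-toll proposal: first contact from an
    unknown sender costs a fee, later mail is free until the
    recipient revokes that sender."""

    TOLL = 0.50  # assumption: flat 50p toll on first contact

    def __init__(self, pre_authorised=()):
        # Friends/family/services can be allow-listed up front.
        self.authorised = set(pre_authorised)

    def deliver(self, sender: str, paid: float = 0.0) -> bool:
        if sender in self.authorised:
            return True  # subsequent mail is free
        if paid >= self.TOLL:
            self.authorised.add(sender)  # paying the toll grants access
            return True
        return False  # unsolicited and unpaid: bounced

    def revoke(self, sender: str) -> None:
        self.authorised.discard(sender)  # next mail needs a fresh toll

inbox = TollInbox()
assert not inbox.deliver("spam@example.com")           # unpaid stranger bounced
assert inbox.deliver("friend@example.com", paid=0.50)  # toll paid, delivered
assert inbox.deliver("friend@example.com")             # follow-ups are free
inbox.revoke("friend@example.com")
assert not inbox.deliver("friend@example.com")         # must pay again
```

At bulk-spam volumes even a 50p toll per new recipient dominates the sender's costs, while a legitimate first contact pays it once.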

>transactional emails from various services that you’ve signed up for

These are among the main culprits of unwanted email... and a toll system would make them all the more valuable for even worse actors to take advantage of.

When Digg restarted, you had to pay $5 to create an account.

Do you think there is a price point that locks out spammers without locking out poor people?

Probably not. The problem is that spammers/scammers are looking for whales, and if you're talking about draining the retirement account of an American who's been saving all their life, that's quite a big payout, in the six or seven figures.

In the case of 419 scams, I used to ask "who would expect $20M to fall out of the sky?" The obvious answer is "someone who already had $20M fall out of the sky".

The bot problem can easily be solved. It’s just that no one likes the cure. Think about this for a minute: what would happen if you had a country where all its citizens could act anonymously with no consequences, no reputation, no repercussions, and no trace? Would you want to go there? Live there? No, because it would be a lawless wasteland dominated by the worst of the worst.

Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.

You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.

No, it’s not the internet we all wanted, but humanity has ruined the one we have.

I can imagine an "anonymity" or "reputation" filter attached to every interaction on the internet. Enabled by default, but you can disable safe mode and watch the bots having fun.

Also, for me the problem is not anonymity itself, but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.

I think this is a great way to frame the conversation and a possible solution: reputation. Things like accumulated karma or credits and IRL connections (big data will love this) all begin to feel dystopian, whereas reputation, I believe, is something everybody can get behind. It can absolutely remain anonymous while still benefiting from IRL meetups for big reputation bumps (just use your handle). We all hang out in lots of places online; let that rep build and be used everywhere. Pretty sure they were trying to do something like this in the fediverse, but I haven't touched base on it in a long time...

>The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity.

Anonymity is not the problem, though. We've had anonymity for a long while, and it has worked fine. Would removing anonymity suddenly fix all this? No, absolutely not. Astroturfing and PR campaigns happened before AI comments were a concern, same as bad actors.

The problem here is the "recent" development of trusting whatever you read online. Of insisting that content should be personal, trustable and real, when none of this can ever be ensured. The separate, but related problem of engagement-based economy makes it way worse.

And remember: social media sites don't actually want to get rid of bots, for the most part. That's not in their interest as long as bots increase engagement. Does anyone trust them to actively hurt their bottom line in order to promote honest, productive discourse? Please.

So you are missing something here. Up until recently, IRL was anonymous in practice, because capturing all that data on what people are doing was expensive and difficult to process. Cameras weren't everywhere either.

If you lie to me in the real world, I know what you look like and won’t trust you again. You cannot change your face. If you punch me in the real world, I can punch you back. If you stab me in the real world, you’re likely going to jail once the police catch up to you. You don’t do any of those things because the lack of anonymity imparts consequence. There is no anonymity in the real world unless you run around in a full face mask, in which case no one will trust you anyways.

I suppose reshaping the fundamental social contract with the internet and the computers we use to access them would solve the problem.