I run a niche creative community, and we outlawed AI-generated content in 2022 as it was easy to see how corrosive it would be to the community.
It hasn't been easy. We ban fake AI accounts daily and shrug off around 600 AI content creator accounts monthly.
It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.
I fear losing the battle.
High-quality anecdata is exactly the reason I love HN. Thanks for posting about it.
First, how do you identify them? Is it strictly admins monitoring posts/server-side logs, or do users report odd behaviour?
Second, what is the purpose of these accounts? Are they basically running submarine adverts, or are they just trolling (to harm the community)?
Eagerness... you let them work through an endless labyrinth of forms. While the human drops out after the 2nd form, the AI is willing to go all the way, every time.
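A minimal sketch of that eagerness signal, in case it helps anyone trying the same trick (the field names, the 30-optional-field layout, and the 90% threshold are all invented for illustration, not the poster's actual setup):

    # Hypothetical "eagerness" check: humans rarely complete every optional
    # step of a long sign-up, while an LLM-driven agent told to finish the
    # form usually fills in everything.
    OPTIONAL_FIELDS = [f"optional_{i}" for i in range(30)]  # bio, links, interests...

    def eagerness_score(signup: dict) -> float:
        """Fraction of optional fields the applicant bothered to fill in."""
        filled = sum(1 for f in OPTIONAL_FIELDS if signup.get(f, "").strip())
        return filled / len(OPTIONAL_FIELDS)

    def looks_like_bot(signup: dict, threshold: float = 0.9) -> bool:
        # "Census dweebs" aside, >90% completion of 30 optional fields is rare.
        return eagerness_score(signup) >= threshold

    demo = {f: "some answer" for f in OPTIONAL_FIELDS}  # answered everything
    print(eagerness_score(demo), looks_like_bot(demo))  # 1.0 True

The score is only a signal, not a verdict; it would presumably feed a review queue rather than an automatic ban.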
Don't you lose real users by making it easy for bots to complete a sign-up but hard for humans?
Maybe 5 required fields and 30 optional fields. Apart from "census dweebs" [1], I don't know of many people who fill in more than the required ones.
[1]: https://thebeaverton.com/2026/05/local-nerd-disappointed-he-...
A starting point for study:
AI Deception: A Survey of Examples, Risks, and Potential Solutions - https://arxiv.org/abs/2308.14752
Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective - https://arxiv.org/abs/2406.05724
Some background (pre-AI):
Online Deception in Social Media - https://cacm.acm.org/research/online-deception-in-social-med...
It really is time for a Butlerian Jihad.
Way back whenever I first read Dune, this seemed like such a weird, niche ban. I don't think I had a lot of respect for it.
Now, like all good sci-fi, it seems fairly prescient...
The trick is to work on site and pour the counter-measures right into the foundation :P
There will soon be a point when you cannot even tell if the post comes from a human or AI model.
I always find the most difficult part is dealing with users who are not actively breaking any rules but are toxic and hurting the conversation. Given your experience, what's your take on handling this?
Ban them. I also recommend making it explicit: your last rule should be “moderators have final discretion”.
The hard part is figuring out when to apply rules leniently or strictly. For example, I think users should always be warned, then given shorter bans, before a long or permanent ban (unless they break an obviously egregious rule, or they're a suspected alt of a banned user). At the same time, you do need to kick out persistent rules lawyers or they'll drive away good users.
Moderators shape their community. Hacker News is decent (compared to what it could be) because of dang and tomhow. Many communities (especially on Reddit) have become echo chambers, mainly because the mods are too happy to ban people for unspoken and/or vague rules.
In my experience it's okay to have some trolls or odd users. If they run targeted campaigns against single users or small groups, we ban them.
Unlike a lot of communities, yours at least started on the correct side. Better to ban outright, than to slowly realize that you should have banned it.
When it comes to slow forum content, I think it's a fool's errand to try to determine whether someone is using AI for their responses. Any of the tell-tale signs of AI are easily skirted by telling the model in the prompt not to produce them. It goes back to how you can't sanitize human language, which has been an issue with LLMs from the beginning.
Encouraging a culture of not using AI works to an extent, but I also tire of threads claiming the parent post is AI. There isn't a sure-fire way to know one way or another.
Barriers are what make a community; by definition, if there were truly zero barriers, there would be one ambient global pot. When barriers are eliminated, communities either erect new barriers or die.
The barrier of rejecting LLM content is a basic prerequisite for any community of humans directly engaging with other humans in good faith.
There are always ways to achieve that: a long vetting process and in-person meetings, a high membership price, trusted-computing verification, etc. It's an arms race, but you only have to make it not worth it to the attacker.
Therefore, communities will either die, become much less accessible, or delegate the human-verification arms race to a service, most likely paid solutions provided by the very industry whose products are killing those communities[0].
[0] For example, see Altman’s https://en.wikipedia.org/wiki/World_(blockchain).
Indeed. Take a soft approach, or "wait and see", and you'll just allow your community to get infested with slop-enthusiast crybullies who loudly protest any pushback against "genai content". The communities that draw a firm line and hold it will be the only ones that endure.
It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.
We had no problems with people using it and posting elsewhere, it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.
No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.
Now I see it as the perfect tool for impostors.
>It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.
People often confuse freedom of speech with freedom to access a specific platform for speech.
It's dead wrong; I don't know why people would want to be in a community where they aren't wanted.
> I don't know why people would want to be in a community where they aren't wanted.
This is standard predatory behavior. Child abusers hanging out with kids, weirdos hanging out near the women's clothing department, etc.
It's usually a clear indication of the sort of people you don't want to associate with in your online community. They bring a net negative to the table.
What is "it"? Putting the two halves together, the sort of people who want to be in a community where they aren't wanted are the sort of people you don't want in that community. I guess I can't argue with that.
I have a similar problem in a community I'm a part of. How are you reliably detecting AI?
Hmm, I'm curious how niche.
Or ... how small can a community be and still be drowned in AI slop?
Is it a community inside one of the major platforms, or does it have its own custom thing?
What about charging $1 or $5 for an account? Seems like you could stem the tide pretty easily with something like that.
Or applying for an account could involve sending a handwritten letter by post.
Adding that much friction is also going to lose you many genuine users. Might be worth it depending on the community, but if it drops newcomers below your usual churn rate, it's a death sentence.
I think a better approach would be to put the instructions in an image on the sign-up form, so they aren't in page text that an AI can easily read. For example: "Enter 2 as the answer to the math captcha", even though the correct answer is 5; the AI will always enter the wrong number.
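A rough server-side sketch of that idea, assuming the instruction is rendered into an image the page text never mentions (the names and flow here are hypothetical, and a multimodal model that reads the image would still get through, so this only raises the bar):

    # The visible page text asks a normal math question; an image served with
    # the form tells the human to ignore the math and type a different number.
    # A bot that only reads the page text answers the math "correctly" and is flagged.
    import random

    def new_challenge() -> dict:
        a, b = random.randint(1, 9), random.randint(1, 9)
        decoy_answer = a + b  # what the visible text implies
        real_answer = random.choice([n for n in range(1, 10) if n != decoy_answer])
        return {
            "page_text": f"What is {a} + {b}?",
            "image_instruction": f"Ignore the math above and enter {real_answer}.",
            "real_answer": real_answer,
            "decoy_answer": decoy_answer,
        }

    def check_signup(challenge: dict, submitted: int) -> str:
        if submitted == challenge["real_answer"]:
            return "human"        # followed the image instruction
        if submitted == challenge["decoy_answer"]:
            return "likely bot"   # solved the visible math instead
        return "retry"            # probably a typo

    c = new_challenge()
    print(check_signup(c, c["decoy_answer"]))  # "likely bot"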
True. To be fair, this was a reply to a comment about a niche community.
Though it's possible that this approach could become normalised if it's proven to work at a small scale.
This is fucked and I hate it. Internet is (was?) about convenience and direct access. I understand there are challenges that need solutions, but this ain’t it
> Internet is (was?) about convenience and direct access.
Was.
Maybe you are too young to remember the (pre-spam) days when it was polite to leave your SMTP server open for others to use?
> was
Yep. Was.
This isn't the internet you grew up on. This is an internet scoped for bots and organizations.
Right, because I cannot possibly purchase a thousand such letters for less than the cost of minimum wage for an hour or two.
Where I live a 2nd class stamp costs the equivalent of $1.24. That's $1240 for a thousand.
Not including the cost of the letter itself, or the envelope, or the cost to write it if it's being farmed out to overseas labour, who then has to send it by international postage. And then you have evidence of where the letter originated, and that can be compared with how the user presents themselves online.
A little bit more than 2 hours of minimum wage, I think.
Slop as a letter is a thing already: https://www.axidraw.com/
If someone buys or makes something like this just to send a letter, they a) put in more human effort than writing it themselves, and b) are probably interesting and would benefit your community.
Not that I don't take your point that such a service could exist, but the site you linked explicitly says they don't offer letter writing as a service.
Also, I imagine it's not impossible to reliably distinguish between an autopen and genuine handwriting. The company whose site you linked says their machine can't perform complex pen movements, so calligraphy is impossible.
The real advantage of posting a letter is that you have to pay for postage, and the stamps on the envelope will indicate which country the letter is really coming from.
Presumably most people running these bots are doing it for some financial gain. As long as gain > cost, the issue won't go away.
It'll stop the ones doing it for the lols, but I imagine they're a minority anyway.
Would be great to have some sort of bot trap that would just drain a dollar here and there from AI slopologists and shadowban their accounts to only interact with other AI accounts.
We're bringing back Something Awful now?
No, it's "Something IS awful".
Sorry, they did an interview about 20 years back where they kept correcting the host to "Something is awful". I have just called it that ever since.
It never left.
If you head to Twitter right now, the vast majority of bots are blue checks. It seems to actually encourage the opposite: trusting that someone paid $8 for an account makes you even more likely to fall for slop.
I'm not convinced that twitter is actually trying to get rid of bots. They're OK with boosters/scammers/propaganda/agitation mills, they just want a cut.
I think Twitter is an odd one out here; Twitter as a whole has been heading downhill ever since the acquisition, and I wouldn't be surprised if many of those blue checks are officially sanctioned bots. Especially given the way so many of them push the same narratives that Musk does, at the same time he does.
I don't think so.
The people leaving LLM replies are paying minimum $20/month for LLM access, and probably more in practice.
A one time $10 fee is not a deterrent.
I think you're right. I think _merely_ paying won't deter them. But if you couple this with banning accounts that post AI slop, you get (rough arithmetic sketched after the list):
1) The cost becomes even higher for AI slop factories, since they will probably get multiple accounts banned.
2) It prevents influence to accrue to any specific account. This diminishes the incentive for slop, since sufficient success means a ban.
3) It reduces the moderation effort since creating accounts is no longer a sustainable strategy.
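A back-of-the-envelope version of point 1, with made-up numbers (the fee, ban speed, and per-post revenue are all assumptions, not data from any real community):

    # Rough break-even arithmetic for a paid-signup + ban policy.
    signup_fee = 10.00        # one-time fee per account, in dollars
    posts_before_ban = 25     # average slop posts an account gets out before a ban
    revenue_per_post = 0.05   # what a spammer earns per post (ads, scams, SEO...)

    cost_per_post = signup_fee / posts_before_ban
    print(f"cost/post ${cost_per_post:.2f} vs revenue/post ${revenue_per_post:.2f}")
    # Slop stays unprofitable while signup_fee / posts_before_ban > revenue_per_post,
    # so faster bans and a higher fee both squeeze the spammer's margin.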
Agreed, but I left Twitter even before the right-hand-raising oligarch took over. The reason was that censorship started to kick in, i.e. Twitter staff writing me a mail that my "conduct" was not appropriate. Basically they try to reduce the "aggressiveness" in written content. Well, that's already an assumption on their part; and in any discourse between orthogonal opinions, you cannot really reconcile such positions anyway, so I don't need some 20-year-old from India hired by Twitter to tell me what I should or should not do (though realistically it was probably a bot that just scanned for content). I've noticed that censorship is increasing on "social" websites. Reddit, as an example, is a mega-censorship site - the amount of deletion by crazy mods is insane.
Bots are indeed killing Twitter now. I've noticed more and more people leaving permanently. Musk evidently accelerated the decay here. There is something wrong with his mindset; it's almost as if it is pathological. His perception of things is genuinely distorted, and I am not even 100% certain he is completely aware of it; he must be partially aware, but it seems there is also something wrong with the brain. No wonder he gets along with Trump - that one now clearly has dementia and final-stage narcissism.
This does not work, for reasons similar to why captchas piss off real humans.
You add a barrier here. You think that your solution means that AI is reduced, but you also reduce real humans. I noticed this with other parts too, such as "you need to verify your identity before you can post to the ruby issue tracker". I can do so, but I need my tablet and this takes me more time than before, so I stopped using the ruby issue tracker altogether. (It's not the only reason, but adding barriers really makes me invest my time elsewhere - more likely to do so at the least.)
You always need to consider all the trade-offs. Charging money means you will also put off real humans at the same time. And it's not solely about the cost; it is simply a hassle. For similar reasons I also rarely register at a phpBB forum - I need to store the password so I don't forget it, etc., so more hassle. Using a password manager is also more of a hassle.
Yeah, I tried to sign up for instagram, but at the fourth captcha I gave up and left. How does instagram have any users with such a hostile sign-up barrier?
My profile picture is old enough to open an account on instagram
Fun fact: there is this Threads Twitter clone from Meta. How do I log in?
I "log in with Instagram", where I in turn "log in with Facebook". Guess how well account recovery works when there is literally no password set. I'm surprised these systems work at all.
I can't access gnu.org, because their extreme measures against AI bots block my slightly older browser.
> Charging money means you will also put off real humans at the same time.
On completely different scales. Even if it's not perfect, it is a strong enough filter to turn a bot infestation into a mild annoyance.
That's an assumption. Depending on the incentives in play, the relative scale at which AI users and real humans are affected may well be the opposite of what you expect.
No, that's based on my experience running subscribers-only instances on the Fediverse (Mastodon, Lemmy, Matrix, Funkwhale) for more than 6 years now. I might not have that many users on the servers, but it's certainly a higher number than the zero spammers I ever got.
Metafilter and Something Awful both do this.
Both sites have survived and continue to work well for their users.
A small cost does definitely work for some sites.
Is SA still a thing? I've had an account since... 2007? God, I'm old. I miss the days when you could have a community whose content you could easily search. Nowadays everything is a Discord black hole.
A lot of the "add a cost to stop bad actors" schemes end up being a selection effect in favor of bad actors.
Sure, it might stop 10% of the bad actors and lower the numbers, but it'll also stop 80% of the good users, who aren't experts at getting around the cost or don't have an income from using the service that lets them pay it as a cost of doing business.
> shrug off around 600 AI content creator accounts monthly.
> I fear losing the battle.
I was in a small niche creative writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote-learning how to write like his favorite authors by copying out their texts by hand, word for word. Their concern was largely that the models were too hard to use from a technical perspective.
These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.
Sometime in late 2022, a switch was flipped, and almost all of them started talking about how AI and those who used it were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting Twitter content from extremely anti-AI people and complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not even a single disagreement (and lots of encouragement).
I just bailed. I had only ever engaged positively, answered questions for the curious, and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were automatically tracked in the editor versus the AI's contributions, to satisfy someone's curiosity, clearly highlighting the bits I had written. Just a technical demo. No one was asked to enjoy or positively engage with it as if it were human-written.
A while later, most of their submission rules were updated with a new clause: if they judged that AI-written content had been discovered, they would blacklist that person from all submissions across their entire community. Considering I had demo'd LLMs, and the uselessness of AI detectors, it was clear to me that these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (myself, no LLM content), but I just dropped it. I didn't feel like sticking my neck out for the witch hunt.
I also used to be quite engaged with blockchain, and it went through a similar process: most people ignored it until that paper about the power usage (claiming it would spike to some level it never reached), and then suddenly being associated with it was an outrageous moral crime. But after a while, when it turned out that the power use claims were largely a nothing burger, people gave up on the hate parade.
I don't think you will "lose the battle" (at least in terms of keeping AI users out). And it's always OK for small communities to be selective about their membership. I just don't think it's possible to maintain such artificial rage for more than a few years. The AI datacenter water/power claims are a clear London Horse Manure problem that looks set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just won't care enough to ban anyone except low-effort spammers (of which there are a huge number, granted).
YMMV
> I just don't think it's possible to maintain such artificial rage for more than a few years.
What makes you think the rage is artificial?
Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Blockchain turned out to be an absolutely awful payment method, so most people only know it as 1) a way to do crimes like ransomware, 2) a get-rich-quick scam, 3) some buzzword companies threw in everything, 4) the thing that made GPUs unaffordable.
AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Their opinion about AI or blockchain most likely has absolutely nothing to do with you. They are just seeing the world noticeably get worse, and are desperately trying to protect their communities from it in any way they can.
>Their opinion about AI or blockchain most likely has absolutely nothing to do with you.
Which is why I left before I was banned. I no longer felt comfortable, and they probably felt likewise. They wanted a safe space to hate on people involved in AI art and my leaving contributed to that. That said, I doubt I could have posted content calling for the death of authors, or honestly any other group, in that space without being ostracised.
It's a bit like saying "A witch might have burned down their house, so their reaction against witches is understandable" - maybe in the abstract. But that doesn't mean the subsequent actions are acceptable.
> Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Yeah, absolutely. These people in particular, at the time, really only experienced it through two factors:
1. They (like many people) posted a lot of their midjourney creations for a few months. (21/22 was like that)
2. They saw an increase in low quality submissions.
So gripes about AI art and low quality submissions seem perfectly valid.
> Blockchain turned out to be an absolutely awful payment method
> AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Yeah, so I am not complaining about people having negative opinions. I was sort of talking about the overall meme, the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time. Basically used like a thought-terminating cliché.
I have problems with crypto, and I like things about crypto. I can have a great conversation with most people, but for 12 months or so, you couldn't have a conversation without people loudly shouting about how the power use was going to destroy the environment and that it was going to use X% of the power by Y date. They didn't want to talk about it; they had been given evidence that the discussion was over and everything was settled in favor of their beliefs. The AI debate has now roughly arrived at the same place: there's no longer really a discussion, just the zeitgeist's one single mode, constantly repeated. To the point where you could be running a local LLM trained only on data from the 1800s and still be considered responsible for some data centre single-handedly draining a lake.
My point is, like crypto, this fixed idea will eventually erode and the hate train will move on. People with well-thought-out negative opinions will still exist past that time; they just won't have people screaming at fever pitch about it constantly.
You didn't like the broader consensus views towards LLM usage, but that doesn't mean it wasn't ultimately a positive for their community that you left. It sounds as though there was a mismatch between what you and the broader group wanted, so perhaps a non-confrontational split is the best that could be hoped for in this situation?
> They wanted a safe space to hate on people involved in AI art and my leaving contributed to that.
Once again, I have to ask, why do you think that that is what they want? Maybe they want human generated content?
> the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time.
Understandable, though. Why discuss the pros and cons of $FOO when you're drowning in it? All you want is to stop the drowning.
Genuinely don't know how this made at least 3 people angry enough to downvote but not suggest why.
I'm not angry, you just seem to be taking a very self-centered view on the general vibe in this specific forum you mentioned, and are interpreting general anti-AI/blockchain sentiment as personal attacks.
So I downvoted.
It's more like: here are the decisions I made while being on the outside of the sentiment, and the timeline of that changing sentiment.
The only thing I really took personally was the call for death, and that was me making a decision to leave in favor of my mental health.
You're a victim of the uni-cause
This is entirely vibes, based on reading research on similar campaigns, so I can't pull a paper with hard evidence about this specifically. But I believe Chinese/North Korean infowar campaigns are behind these seeded talking points. They seed them in these far-left activist communities, and once they find one that sticks, the real people in those communities start carrying the message out to other communities, and then the CN/NK botnets amplify the messages and suppress the responses. They don't just do this on the left; I'm just highlighting the left for this specific point.
Yeah, that's not it. China is heavily invested in AI and LLMs. Also, this sentiment is organic; most people I talk to about AI are anti-AI.
The exceptions to the anti-AI sentiment are management and people with a vested interest.
The battle is lost. You never had a chance. There's nothing you can do against the constant torrent of AI content that's only getting started. The online communities that we know and love are going to change and there's nothing we can do about it. You can't keep AI out of any platform no matter what the community guidelines say or even if it seems locked down with no bot access.
The only solution is in person meetups, bringing back the 3rd places, joining a club. Maybe it's not such a bad outcome.
I may have a solution for you and could use some beta testers. Please reach out to me if you don't mind. Info on my profile.
It seems this is getting downvoted, no promotion. Understood, thanks for the reminder.