Unlike a lot of communities, yours at least started on the correct side. Better to ban outright, than to slowly realize that you should have banned it.
When it comes to slow forum content, I think it's a fool's errand to try to determine whether someone is using AI for their responses. Any of the tell-tale signs of AI are easily skirted by telling the model in the prompt not to produce them. It goes back to how you can't sanitize human language, which has been an issue with LLMs from the beginning.
Encouraging a culture of not using AI works to an extent, but I also tire of threads claiming the parent post is AI. There isn't a sure-fire way to know one way or another.
Barriers are what make a community; by definition, if there were truly zero barriers, there would be one ambient global pot. When barriers are eliminated, communities either erect new barriers or die.
The barrier of rejecting LLM content is a basic prerequisite of any community of humans directly engaging with other humans in good faith.
There are always ways to achieve that: a long vetting process and in-person meetings, a high membership price, trusted computing verification, etc. It's an arms race, but you only have to make it not worth it to the attacker.
Therefore, communities will either die, become much less accessible, or delegate the human-verification arms race to a service—most likely paid solutions provided by the very industry whose products are killing those communities[0].
[0] For example, see Altman’s https://en.wikipedia.org/wiki/World_(blockchain).
Indeed. Take a soft approach, or "wait and see", and you'll just allow your community to get infested with slop enthusiast crybullies that loudly protest any pushback against "genai content". The communities that draw a firm line and hold it will be the only ones that endure.
It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.
We had no problems with people using it and posting elsewhere, it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.
No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.
Now I see it as the perfect tool for impostors.
>It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.
People often confuse freedom of speech with freedom to access a specific platform for speech.
It's dead wrong. I don't know why people would want to be in a community where they aren't wanted.
> I don't know why people would want to be in a community where they aren't wanted.
This is standard predatory behavior. Child abusers hanging out with kids, weirdos hanging out near the women's clothing department, etc.
It's usually a clear indication of the sort of people you don't want to associate with in your online community. They bring a net negative to the table.
What is "it"? Putting the two halves together, the sort of people who want to be in a community where they aren't wanted are the sort of people you don't want in that community. I guess I can't argue with that.
I have a similar problem in a community I'm a part of. How are you reliably detecting AI?