When it comes to slow forum content, I think it's a fool's errand to try to determine whether someone is using AI for their responses. Any of the tell-tale signs of AI are easily skirted by instructing the model in the prompt not to produce them. It goes back to the fact that you can't sanitize human language, which has been an issue with LLMs from the beginning.

Encouraging a culture of not using AI works to an extent, but I also tire of threads accusing the parent post of being AI. There isn't a sure-fire way to know one way or the other.

Barriers are what make a community; by definition, if there were truly zero barriers, there would be one ambient global pot. When barriers are eliminated, communities either erect new barriers or die.

The barrier of rejecting LLM content is a basic prerequisite for any community of humans directly engaging with other humans in good faith.

There are always ways to achieve that: a long vetting process and in-person meetings, a high membership price, trusted-computing verification, etc. It’s an arms race, but you only have to make the attack not worth the attacker's effort.

Therefore, communities will either die, become much less accessible, or delegate the human-verification arms race to a service, most likely a paid solution provided by the very industry whose products are killing those communities[0].

[0] For example, see Altman’s https://en.wikipedia.org/wiki/World_(blockchain).