I wonder if LLM analysis could help automate moderation if implemented well. It could still be human-in-the-loop, and you'd need to apply it tastefully (!!!), i.e. not end up with only the most hardcore dogmatists allowed to discuss in some extremist group, though those groups are arguably a separate issue entirely. Also, beware of malicious users wasting tokens.
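Roughly what I have in mind, as a toy sketch only: the LLM call is a stub, and the token estimate and budget numbers are made-up placeholders, not a real provider's API.

    # Sketch: human-in-the-loop moderation with a per-user token budget.
    # llm_flag() is a placeholder for whatever LLM classifier you'd actually call.
    from dataclasses import dataclass, field
    from collections import defaultdict

    @dataclass
    class Post:
        user: str
        text: str

    @dataclass
    class ModerationQueue:
        token_budget_per_user: int = 2000              # cap LLM spend per user
        spent: dict = field(default_factory=lambda: defaultdict(int))
        for_human_review: list = field(default_factory=list)

        def llm_flag(self, text: str) -> bool:
            """Stand-in for an LLM call; True means 'a human should look at this'."""
            return "spam" in text.lower()              # stub logic only

        def handle(self, post: Post) -> str:
            cost = len(post.text) // 4                 # rough token estimate
            if self.spent[post.user] + cost > self.token_budget_per_user:
                # Token-wasting users stop hitting the LLM; their posts fall
                # back to ordinary non-LLM moderation instead.
                return "skipped-llm"
            self.spent[post.user] += cost
            if self.llm_flag(post.text):
                self.for_human_review.append(post)     # human stays in the loop
                return "queued-for-human"
            return "auto-ok"

    if __name__ == "__main__":
        q = ModerationQueue()
        print(q.handle(Post("alice", "Totally normal comment")))
        print(q.handle(Post("bob", "Buy cheap spam here!!!")))

The point is just that the LLM only flags, a human still decides, and a budget keeps abusive users from burning through your tokens.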