ScholarlyArticle: "Antislop: A Comprehensive Framework for Identifying and Eliminating Repetitive Patterns in Language Models" (2025) https://arxiv.org/abs/2510.15061 :

> Abstract: [...] Our approach combines three innovations: (1) The Antislop Sampler, which uses backtracking to suppress unwanted strings at inference time without destroying vocabulary; (2) An automated pipeline that profiles model-specific slop against human baselines and generates training data; (3) Final Token Preference Optimization (FTPO), a novel fine-tuning method that operates on individual tokens, surgically adjusting logits wherever a banned pattern has appeared in an inference trace.
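
A minimal sketch of idea (1), the backtracking sampler, under stated assumptions: `logits_fn`, `detok`, and the toy vocabulary below are illustrative stand-ins, not the paper's actual API. On hitting a banned string, the loop rewinds to the token that began it and bans that token at that single position, leaving the vocabulary untouched everywhere else:

```python
import math
import random

def sample(logits, banned):
    """Softmax-sample a token id, skipping ids banned at this position."""
    ids = [i for i in range(len(logits)) if i not in banned]
    if not ids:  # everything banned here; a real sampler needs a fallback
        raise RuntimeError("all tokens banned at this position")
    weights = [math.exp(logits[i]) for i in ids]
    return random.choices(ids, weights=weights, k=1)[0]

def generate(logits_fn, detok, banned_strings, max_tokens=50, max_steps=500):
    tokens = []     # generated token ids
    banned_at = {}  # position -> token ids banned at that position only
    for _ in range(max_steps):  # hard cap so backtracking always terminates
        if len(tokens) >= max_tokens:
            break
        logits = logits_fn(tokens)
        tokens.append(sample(logits, banned_at.get(len(tokens), set())))
        text = detok(tokens)
        hit = next((s for s in banned_strings if text.endswith(s)), None)
        if hit is None:
            continue
        # Backtrack: pop tokens until the banned string is gone, then ban
        # the token that started it at that position. (A fuller version
        # would key bans on the whole prefix, not just the position.)
        start = len(text) - len(hit)
        bad = None
        while tokens and len(detok(tokens)) > start:
            bad = tokens.pop()
        banned_at.setdefault(len(tokens), set()).add(bad)
    return detok(tokens)

# Toy demo: token id == index into VOCAB; uniform logits stand in for a model.
VOCAB = [" a", " testament", " to", " human", " ingenuity", " and", " effort"]
out = generate(lambda toks: [1.0] * len(VOCAB),
               lambda toks: "".join(VOCAB[t] for t in toks),
               banned_strings=[" a testament to"], max_tokens=6)
print(out)  # in this demo, never contains " a testament to"
```

The point of backtracking over a blanket logit ban is that the offending tokens remain usable in every other context, which is what the abstract means by suppressing strings "without destroying vocabulary".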

From https://news.ycombinator.com/item?id=45546037#45585680, an additional potential detection method, aimed at repeated agent memory writes rather than token-level patterns:

>> Could build a simple heuristic: if similar memory content gets created/updated N times within a short timeframe, flag it as a potential loop
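
A rough sketch of that heuristic, with assumed knobs: `n`, the window length, and the `difflib` similarity test are all illustrative choices, not anything the comment specifies:

```python
import time
from collections import deque
from difflib import SequenceMatcher

class LoopDetector:
    """Flag memory writes whose content repeats n times within a window."""

    def __init__(self, n=3, window_s=60.0, min_similarity=0.9):
        self.n = n                            # repeats before flagging
        self.window_s = window_s              # sliding window, seconds
        self.min_similarity = min_similarity  # "similar" threshold
        self.recent = deque()                 # (timestamp, content) pairs

    def record(self, content, now=None):
        """Record a create/update; return True if it looks like a loop."""
        now = time.monotonic() if now is None else now
        # Evict writes that have aged out of the window.
        while self.recent and now - self.recent[0][0] > self.window_s:
            self.recent.popleft()
        similar = sum(
            1 for _, past in self.recent
            if SequenceMatcher(None, past, content).ratio() >= self.min_similarity
        )
        self.recent.append((now, content))
        return similar + 1 >= self.n  # +1 counts the current write

# Demo: the third near-identical write within the window gets flagged.
det = LoopDetector(n=3, window_s=60.0)
for note in ["user prefers dark mode",
             "user prefers dark mode!",
             "user prefers dark mode"]:
    if det.record(note):
        print("potential loop:", note)
```

`SequenceMatcher` is quadratic in content length; a production version would more likely compare hashes or embeddings of the memory content.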