Doesn't this apply only to the toy AGI constructed for these examples, which consists of an LLM plus a prompt that generates endless "analysis"?
It seems like the consequences would be wildly different if you simply gave the LLM a fixed response length.