Judging by the number of academic papers on the topic, model collapse is a popular idea among people who know a lot about AI as well.

Model collapse is a phenomenon demonstrated when models are recursively trained largely or entirely on their own output. Given that most training data is still generated, or at least edited, by humans rather than being purely synthetic, I'm not entirely sure why one would expect to see evidence of model collapse right now, but dismissing it as something that can't happen in the real world seems a bit premature.
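To make that recursive loop concrete, here is a minimal toy sketch of it (not any particular paper's setup; the Gaussian, the sample sizes, and the function name are purely illustrative): fit a one-dimensional distribution, sample from the fit, re-fit on those samples, and repeat. Under these assumptions the estimated spread drifts toward zero over generations, which is the toy version of collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse_demo(generations=200, n_samples=20):
    """Recursively fit a 1-D Gaussian to samples drawn from its own previous fit."""
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # stand-in for "real" data, spread 1.0
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()           # maximum-likelihood fit of the current data
        data = rng.normal(mu, sigma, size=n_samples)  # next generation trains only on model output
    return data.std()

print(collapse_demo())  # ends up at a tiny fraction of the original spread
```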

We've identified the conditions under which model collapse happens more slowly or fails to happen altogether. Basically all of them are met by real-world datasets, and I don't expect that to change.
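As a hedged illustration of one such condition, often reported in this literature: keep the original data in the training pool and accumulate synthetic samples alongside it rather than overwriting it. Rerunning the same toy loop that way (again, names and numbers are only illustrative) shows the spread holding roughly steady instead of collapsing.

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulate_demo(generations=200, n_samples=20):
    """Same recursive Gaussian fit, but each generation's synthetic samples are
    added to the pool alongside the original data instead of replacing it."""
    pool = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # original "human" data
    for _ in range(generations):
        mu, sigma = pool.mean(), pool.std()
        synthetic = rng.normal(mu, sigma, size=n_samples)
        pool = np.concatenate([pool, synthetic])  # accumulate rather than overwrite
    return pool.std()

print(accumulate_demo())  # stays near the original spread instead of drifting toward zero
```

The point of the contrast is that once the older, human-generated data stays in the mix, the feedback loop driving collapse is heavily damped, which is exactly the situation real-world datasets are in today.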