Nothing I've seen from the AI labs appears to indicate that they are worried about model collapse in the slightest.

That makes sense to me: if their models start getting worse because of slop in the training data, they can detect that and take steps to fix it.

Their entire research pipeline is built around finding what makes models score better! Why would they keep using a technique that made them score worse?
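
To make that concrete, here's a toy sketch of the kind of benchmark regression gate being described. Everything in it is made up for illustration: the benchmark names, the scores, and the tolerance are stand-ins, not any lab's real tooling.

```python
# Toy eval-based regression gate. A real lab would plug in its own
# eval harness and benchmark suite; the numbers here are invented.

TOLERANCE = 0.01  # ignore noise-level score differences

# Hypothetical scores: the last shipped model vs. a candidate trained
# on a new (possibly slop-contaminated) data mix.
baseline = {"mmlu": 0.78, "gsm8k": 0.82, "humaneval": 0.71}
candidate = {"mmlu": 0.79, "gsm8k": 0.74, "humaneval": 0.70}

regressions = [
    bench for bench, score in candidate.items()
    if score < baseline[bench] - TOLERANCE
]

if regressions:
    # The new data mix made things measurably worse, so it never ships.
    print(f"rejecting data mix, regressions on: {regressions}")
else:
    print("candidate passes the gate")
```

If contaminated data drags scores down, it shows up here before the model ever ships, which is the whole point of the comment above.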

> Nothing I've seen from the AI labs appears to indicate that they are worried about model collapse in the slightest.

AI labs are insufferable hype machines; they are unlikely to sow doubt about their own business models.

> they can detect that and take steps to fix it.

Each model will need an endless diet of new content to remain relevant, and over time, avoiding ingestion of LLM output (and the accompanying inbreeding depression) will likely be a tricky proposition. Not impossible, but expensive and error-prone.
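
For illustration, here's a toy sketch of that filtering problem, assuming a stand-in synthetic-text detector. The phrase-matching heuristic below is a deliberately crude placeholder, not a real classifier; real detectors are unreliable, which is exactly why this gets expensive and error-prone at scale.

```python
# Toy sketch: score each crawled document with a (hypothetical)
# synthetic-text detector and drop anything above a threshold.

def synthetic_score(text: str) -> float:
    """Stand-in for an AI-text classifier; returns P(text is LLM output)."""
    # Placeholder heuristic: count a few telltale stock phrases.
    tells = ("as an ai language model", "delve into", "in conclusion,")
    hits = sum(phrase in text.lower() for phrase in tells)
    return min(1.0, hits / len(tells) + 0.1)

def filter_crawl(docs: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents the detector thinks are probably human-written."""
    return [d for d in docs if synthetic_score(d) < threshold]

docs = [
    "My grandmother's recipe, scrawled on an index card in 1974.",
    "As an AI language model, I cannot delve into that topic.",
]
print(filter_crawl(docs))  # the second document gets dropped
```

False positives throw away good human text and false negatives let the slop through, and both costs grow with the size of the crawl.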