I don’t think it’s true that the perfect LLM emulates a human perfectly. LLMs are language models whose purpose is to entertain and solve problems. Yes, they do that by imitating human text at first, but that’s merely a shortcut to get them performing well. Making money by maximizing their goal (entertaining and solving problems) will eventually entail self-training on tasks so that they perform superhumanly at those tasks. This seems clearly possible for math and coding; it remains an open question which approaches will work for other domains.
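To make the self-training idea concrete: in domains with a programmatic checker, a model can generate candidate solutions, keep only the ones that verify, and fine-tune on that filtered set, bootstrapping past its imitation-learning starting point. Below is a minimal toy sketch of that loop, not any lab's actual pipeline; names like `model_generate` are hypothetical stand-ins for real inference and training calls.

```python
import random

# Hypothetical stand-in for a real inference call.
def model_generate(problem, temperature):
    """Pretend the model proposes an answer; here, a noisy guess at a product."""
    a, b = problem
    noise = random.choice([0, 0, 0, 1, -1]) if temperature > 0 else 0
    return a * b + noise

def verify(problem, answer):
    """Programmatic checker -- the key ingredient available in math and coding."""
    a, b = problem
    return answer == a * b

def self_train_round(problems, temperature=1.0):
    """One round: sample solutions, keep only the verified ones as new training data."""
    verified = []
    for p in problems:
        answer = model_generate(p, temperature)
        if verify(p, answer):
            verified.append((p, answer))
    # In a real pipeline this batch would feed a fine-tuning step,
    # and the improved model would be sampled again next round.
    return verified

problems = [(random.randint(2, 99), random.randint(2, 99)) for _ in range(100)]
batch = self_train_round(problems)
print(f"kept {len(batch)}/{len(problems)} verified solutions for the next fine-tune")
```

The point of the sketch is the loop structure (generate, verify, keep), which needs no human labels once the verifier exists; that is why math and coding look tractable and fuzzier domains do not.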

In a sense, GPT-4 is self-training already, in that it’s bringing in money for OpenAI, which is then spent on training further iterations. (This is a joke.)