Updating the model's internal knowledge is not the primary motivator here, since you can get that information more easily, and more reliably (with less hallucination), at inference time through a web search tool.

They're training new models because the (software) technology keeps improving, the (proprietary) data sets keep improving (through extensive manual labelling but also synthetic data generation), and in general researchers keep developing a better understanding of what matters when training LLMs.