There’s some evidence that reasoning models can improve themselves, though at a glacial pace. Perhaps the stuff they’re all keeping under wraps, only dropping hints about every now and then, is scarier than you’d expect. (Google recently said its AI is already improving itself.)
Hyperparameter optimization in the 20th century was AI improving itself. Even more basic, gradient descent is a form of AI improving itself. The statement sounds more impressive than what it may actually mean; far more detail would be needed to evaluate how impressive the claim really is.
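In that spirit, here’s a minimal sketch of the point: plain gradient descent already has the shape of a system “improving itself,” in that it repeatedly updates its own parameter using feedback. The function names and the example loss are illustrative, not from any particular library.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly nudge x against the gradient of some loss function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # the system updates itself from its own feedback
    return x

# Example loss f(x) = (x - 3)^2, gradient f'(x) = 2(x - 3), minimum at x = 3.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # converges toward 3.0
```

Calling this “self-improvement” is technically true but trivial, which is exactly why the bare claim needs more detail to be evaluated.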
https://ai-2027.com/ has a much more in-depth thought experiment, but I’m thinking of AI that hypothesizes improvements to itself, then plans and runs experiments to confirm or reject them.