Hyperparameter optimization in the 20th century was already AI improving itself. Even more basic, gradient descent is a form of AI improving itself. The statement implies something far more impressive than what it may actually mean, and far more detail would be needed to evaluate how impressive the claim really is.
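To make the trivial sense concrete, here's a minimal sketch (nothing from the original, just an illustration): gradient descent is a system adjusting its own parameters using its own error signal.

```python
# Gradient descent on f(w) = (w - 3)^2: the "AI" repeatedly
# updates its own parameter w to reduce its own loss.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)  # self-modification driven by the error signal

# w converges toward the minimizer, 3.0
```

Self-improvement in this sense is mechanical and narrow, which is exactly why the bare phrase "AI improving itself" is underspecified.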
https://ai-2027.com/ has a much more in-depth thought experiment, but I'm thinking of an AI that hypothesizes improvements to itself, then plans and runs experiments to confirm or reject them.
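That hypothesize-test-accept loop can be sketched in miniature. Everything here is hypothetical scaffolding I'm inventing for illustration (the names `evaluate`, `propose_change`, and the toy scoring function are not from any real system):

```python
import random

random.seed(0)

def evaluate(config):
    # Stand-in for running a real experiment/benchmark on a
    # configuration of the system; higher is better.
    return -sum((v - 1.0) ** 2 for v in config.values())

def propose_change(config):
    # Hypothesize an improvement: perturb one knob at random.
    key = random.choice(list(config))
    candidate = dict(config)
    candidate[key] += random.uniform(-0.5, 0.5)
    return candidate

config = {"lr_exponent": 0.0, "depth_scale": 0.0}
score = evaluate(config)
for _ in range(200):
    candidate = propose_change(config)  # hypothesize
    cand_score = evaluate(candidate)    # run the experiment
    if cand_score > score:              # confirm or reject
        config, score = candidate, cand_score
```

The interesting version of the claim is when the "experiments" target the model's own training procedure or architecture, not a toy objective like this; the loop structure, though, is the same.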