It's radically different from human improvement. Imagine if you were handed a notebook with a bunch of writing that abruptly ends. You're asked to read it and then write one more word. Then you have a bout of amnesia and you go back to the beginning with no knowledge of the notebook's contents, and the cycle repeats. That's what LLMs do, just really fast.
You could still accomplish some things this way. You could even "improve" by leaving information in the notebook for your future self to see. But you could never "learn" anything bigger than what fits into the notebook. You could tell your future self about a new technique for finding integrals, but you couldn't learn calculus.
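The notebook loop above can be sketched as a few lines of code. This is a toy illustration, not a real language model: `next_word` here is a hypothetical stand-in for next-token prediction (it just echoes the last word with a tick mark). The point is structural — the function is stateless, so everything it "knows" on a given call must already be written in the notebook it is handed.

```python
def next_word(notebook: str) -> str:
    """Stateless stand-in for the model: it can only read the notebook."""
    # Trivial placeholder rule: repeat the last word with a tick appended.
    words = notebook.split()
    return (words[-1] if words else "start") + "'"

def generate(notebook: str, steps: int) -> str:
    for _ in range(steps):
        word = next_word(notebook)   # read the whole notebook, write one word...
        notebook += " " + word       # ...append it...
        # ...then "amnesia": no state survives to the next iteration
        # except the notebook itself.
    return notebook

print(generate("once upon a time", 3))
# → once upon a time time' time'' time'''
```

Notice that `generate` never stores anything outside `notebook`. Any "improvement" between iterations has to be smuggled through the text itself, which is exactly why the notebook's size caps what can be learned this way.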