They can improve. You can make one adjust its own prompt, but the improvement is limited to what fits in the context window.
It’s not far off from human improvement; ours is limited to what we can remember, too.
We go a bit further, though, in that the neural network itself can rewire and form new connections, not just fill up its memory.
It's radically different from human improvement. Imagine if you were handed a notebook with a bunch of writing that abruptly ends. You're asked to read it and then write one more word. Then you have a bout of amnesia and you go back to the beginning with no knowledge of the notebook's contents, and the cycle repeats. That's what LLMs do, just really fast.
You could still accomplish some things this way. You could even "improve" by leaving information in the notebook for your future self to see. But you could never "learn" anything bigger than what fits into the notebook. You could tell your future self about a new technique for finding integrals, but you couldn't learn calculus.
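For what it's worth, the notebook loop maps pretty directly onto how agent-style setups actually work. Here's a rough Python sketch of the idea; `call_model`, `run_turn`, and `MAX_NOTEBOOK_CHARS` are made-up names standing in for a stateless completion call and the context limit, not any real API:

```python
# A minimal sketch of the "notebook" loop described above. `call_model` is a
# hypothetical stand-in for any stateless completion request; the model has no
# memory between calls beyond what the notebook itself contains.

MAX_NOTEBOOK_CHARS = 2000  # the "notebook" (context window) only holds so much


def call_model(notebook: str) -> str:
    """Hypothetical stateless model call: reads the notebook, returns one note."""
    return f"note {notebook.count(chr(10)) + 1}: refined strategy based on prior notes"


def run_turn(notebook: str) -> str:
    """One cycle: read the notebook, write one more entry, then 'forget'."""
    new_note = call_model(notebook)
    notebook = notebook + new_note + "\n"
    # Anything that no longer fits is lost for good -- this is the ceiling on
    # how much the system can "learn" by leaving notes for its future self.
    if len(notebook) > MAX_NOTEBOOK_CHARS:
        notebook = notebook[-MAX_NOTEBOOK_CHARS:]
    return notebook


if __name__ == "__main__":
    notebook = ""
    for _ in range(5):
        notebook = run_turn(notebook)
    print(notebook)
```

The only state carried between turns is the notebook string itself, and the trimming step is the whole point: a tip about integrating by parts survives, but "knowing calculus" never fits.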