An LLM can't learn without new data and a training run, so it's impossible for it to "self-improve" on its own.

I'm not sure how much an agent could do, though, given the right tools: access to a task management system, a test tracker, robust requirements/use cases.
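
Very roughly, that kind of loop might look like the sketch below. Everything here is made up for illustration (`call_model` stands in for a real inference API, and the two tools are toy stand-ins for a task tracker and test runner):

```python
# Minimal tool-using agent loop: the model picks a tool, we run it,
# and the result goes back into the conversation history.
import json

def call_model(messages):
    """Stub for an LLM call; a real agent would hit an inference API here."""
    # Hard-coded response so the sketch runs end to end.
    return json.dumps({"tool": "run_tests", "args": {}})

TOOLS = {
    "next_task": lambda: {"id": 42, "title": "Fix flaky login test"},
    "run_tests": lambda: {"passed": 17, "failed": 1},
}

def agent_step(history):
    """One turn: ask the model which tool to use, run it, record the result."""
    action = json.loads(call_model(history))
    result = TOOLS[action["tool"]](**action["args"])
    history.append({"tool": action["tool"], "result": result})
    return history

if __name__ == "__main__":
    history = [{"role": "user", "content": "Work through the open tasks."}]
    print(agent_step(history)[-1])
```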

I don't have the link on hand, but people have already shown that LLMs can both generate new problems for themselves and train on them. Not sure why it would be surprising, though - we do it all the time ourselves.
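
The basic shape is simple enough to sketch. This is a toy version of that generate-verify-train loop; the `model` and `finetune` stubs are placeholders for real inference and training calls, and the arithmetic task stands in for whatever verifiable problems the actual work used:

```python
# One round of self-generated training data: pose problems, attempt
# answers, keep only the pairs that pass a ground-truth check, train.
import random

def model(prompt: str) -> str:
    """Stub LLM: poses an arithmetic problem or attempts an answer."""
    if prompt == "POSE":
        a, b = random.randint(1, 9), random.randint(1, 9)
        return f"{a} + {b}"
    a, b = map(int, prompt.split(" + "))
    # Simulate an imperfect solver: wrong roughly 30% of the time.
    return str(a + b if random.random() > 0.3 else a + b + 1)

def verify(problem: str, answer: str) -> bool:
    """Ground-truth check; the verifiable signal is the key ingredient."""
    a, b = map(int, problem.split(" + "))
    return int(answer) == a + b

def finetune(examples):
    """Stub for a training run on the verified (problem, answer) pairs."""
    print(f"Training on {len(examples)} self-generated examples")

kept = []
for _ in range(100):
    problem = model("POSE")
    answer = model(problem)
    if verify(problem, answer):
        kept.append((problem, answer))
finetune(kept)
```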

> An LLM can't learn without new data and a training run.

That's probably the next big breakthrough.