I admit I’m not a heavy LLM user, but AFAIK LLMs don’t recall past interactions, which is something we usually expect from people (including devs).
How meaningful can this be? For example:

- the LLM generates code for a web API but initially ignores security concerns and some other requirements
- you notice this and prompt the LLM to improve the code
- the LLM takes security into account
- the day after, you give the LLM another task and it again ignores the security practices you mentioned earlier

And so on.
If an intern were to do this, you would take them aside and strongly recommend that they learn from past experiences and suggestions, but this doesn’t seem possible with LLMs.
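As I understand it, the root cause is that chat APIs are stateless: the model only ever sees the messages you send in that one request, so a correction made yesterday simply isn’t in today’s context unless you resend it yourself. A minimal sketch of this in Python, where call_llm is a hypothetical stand-in for whatever chat-completions-style client you use:

    # call_llm is a hypothetical wrapper; real chat APIs behave the
    # same way in the relevant sense: the model sees only the
    # `messages` list passed in this call, nothing from past calls.
    def call_llm(messages: list[dict]) -> str:
        """Hypothetical stand-in for a real chat API call."""
        ...

    # Day 1: you correct the model mid-conversation.
    day1 = [
        {"role": "user", "content": "Write a web API endpoint."},
        {"role": "assistant", "content": "(insecure first draft)"},
        {"role": "user", "content": "Add input validation and auth."},
    ]
    fixed = call_llm(day1)  # the correction is in context, so it complies

    # Day 2: a fresh conversation starts from scratch.
    day2 = [
        {"role": "user", "content": "Write another endpoint."},
    ]
    draft = call_llm(day2)  # yesterday's correction is gone

    # The only "memory" is whatever you manually carry forward,
    # e.g. a standing system message of accumulated corrections:
    day2_with_memory = [
        {"role": "system", "content": "Always add input validation and auth."},
        *day2,
    ]

So the "intern" never learns on its own; at best you keep a notes file of past corrections and prepend it to every new session.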