This really resonates with me. I often run into the same situation when I'm trying to fix a bug with an LLM: if my first prompt isn't good enough, I end up stuck in a loop where I keep asking the LLM to refine its solution based on the current context.

The result is that the LLM still doesn't output what I want, even after 10 rounds of fix requests.

So I just start a new session, give the LLM a well-crafted prompt, and suddenly it produces a great result.