My little anecdote of breaking the spell. I might not have been truly under the spell, but I had to get far into my project before losing the "magic" of the code. The trick was simply going back to a slower way of using it, with a regular chat window, and then really reading the code and interrogating everything that looked odd.

In my case I saw a .partial_cmp(a).unwrap() in my Rust code and asked whether there was an alternative. The LLM returned .total_cmp(a). I continued, asking why it had generated the "ugly" unwrap, and the LLM explained that .total_cmp only became available in a later version of Rust, with only a tiny hint that .partial_cmp is simply more common in the original training sets. The final shattering was asking why it used .partial_cmp and getting back "A developer like me...". No, it is an LLM. Somewhere in the system prompt it is told to anthropomorphize its responses, and that is the subtle trick beyond the "skinner box" of pulling the lever hoping for useful output: a bunch of subtle cues that hijack the brain into treating the LLM like a human developer.

So when going back to the agentic flow in my other projects, I try to disable these tricks in my prompts and the AGENTS file, and the results are more useful. I'm more prone to noticing when the output has outdated constructs, and more specific about which version of the tooling I'm using. Occasionally I scrap whole branches when I realize they rest on outdated practices, or simply a bad way of doing things that happens to be more common in the original training data, and restart with the more correct approach. Is it a game changer... no, but it makes it more like a tool that I use instead of a developer of shifting experience level.
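For anyone curious why the LLM's suggestion was actually the better one: .partial_cmp returns an Option because NaN is unordered, so the unwrap can panic, while .total_cmp (stable since Rust 1.62) implements the IEEE 754 totalOrder predicate and never fails. A minimal sketch (the vector contents are my own illustration, not from the original project):

```rust
fn main() {
    let mut v = vec![2.5_f64, f64::NAN, 1.0];

    // .partial_cmp() returns Option<Ordering> because NaN compares equal,
    // less, or greater to nothing; unwrap() would panic on the NaN here:
    // v.sort_by(|a, b| a.partial_cmp(b).unwrap());

    // .total_cmp() gives every value, NaN included, a defined position,
    // so no unwrap is needed and the sort cannot panic.
    v.sort_by(|a, b| a.total_cmp(b));

    assert_eq!(&v[..2], &[1.0, 2.5]);
    assert!(v[2].is_nan()); // a positive NaN sorts after all ordinary values
    println!("{:?}", v);
}
```

So the "ugly" unwrap was not just a style issue; it was a latent panic that the more recent API removes entirely.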