> I have anecdotally found this to be true as well, that an LLM greatly accelerates my ramp up time in a new codebase, but then actually leads me astray once I am familiar with the project.

If you are unfamiliar with the project, how do you determine that it wasn't leading you astray in the first place? Do you ever revisit what you had done with AI previously to make sure that, once you know your way around, it was doing it the right way?

In some cases I have not revisited, as I was happy simply to make a small modification for my own use. In others, I have taken the time to ensure the changes are suitable for upstreaming. In my experience (which I have not recorded methodically in any way), the LLM’s changes at this early stage have been pretty good. This is partly because the changes I make at the early stage are generally small, usually not requiring new functionality but simply hooking existing functionality up to a new input or output.

What’s most useful about the LLM in the early stages is not the actual code it writes but the reasoning, which helps me learn the structure of the project. I don’t take the code blind; I am more interested in the reasoning than in the code itself. I have found this to be reliably useful.

no, they just claim that AI coding tools are magic and drink their kool-aid