Typically, debugging something like a tricky race condition in an unfamiliar code base would require adding logging, refactoring library calls, inspecting existing logs, and maybe even rewriting parts of the program to be more modular or understandable. That work is part of the theory-building.
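
To make that concrete, here's a toy sketch (illustrative only, not from any real code base) of the kind of manual legwork I mean: an unsynchronized counter shared between two goroutines, with logging bolted on so you can watch the interleaving. Go's race detector (go run -race) will flag it too, but the logging and poking around is the theory-building step.

    package main

    import (
        "log"
        "sync"
    )

    func main() {
        var counter int
        var wg sync.WaitGroup

        // Two goroutines increment the same counter without synchronization.
        // That's the race; logging each worker is the old-fashioned way of
        // seeing the interleaving before you understand the code well enough
        // to know where the mutex actually belongs.
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    counter++ // unsynchronized read-modify-write: data race
                }
                log.Printf("worker %d done, counter=%d", id, counter)
            }(i)
        }

        wg.Wait()
        // Because of the race, the final value usually comes out below 2000.
        log.Printf("final counter=%d (expected 2000)", counter)
    }
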

When you have an AI that says "here is the race condition and here is the code change that fixes it", that might be "faster" in the immediate sense, but it means you aren't coming to understand the program any better or making it easier for anyone else to understand. There is also the question of whether this process is sustainable: does an AI-edited program eventually drift so far outside what is "normal" for a program that the AI becomes unable to model correct responses?

This is always my thought whenever I hear "AI let me build a feature in a codebase I didn't know, in a language I didn't know" (which is often; there's at least one in these comments). Great, but what have you learned? This is fine for small contributions, I guess, but I don't hear a lot of stories of long-term maintenance. Unpopular opinion, though, I know.

I guess it's a question of how anyone learns. There's some value in typing code, I suppose, but with tab completion that's been gone for a long time. Letting AI write something and then reading it seems about as good as copying and pasting from some other source.

I'm not super qualified to answer, as I haven't gone deep into AI at all. But from my limited observations I'd say yes and no. You generally aren't copy/pasting entire features, just snippets that you yourself have to string together in a sensible way. Of course there are lots of people who still do exactly that, and that's why I find most people in this industry infuriating to work with. It's all good when it's boilerplate, and that's actually my primary use of "AI": it's essentially been a snippets replacement (and is quite good at that).