My feeling is that (current) AI is more of a teacher than an implementor. It really does help when you're learning something new or looking for ideas about directions to take. The actual code, however, still needs to be written by humans for the most part, it seems.
AI is a great tool and does speed things up massively; it just doesn't live up to the magical notion that we provide the ideas and AI does all of the grunt work. In general, it's always better to form mental models based on actual evidence rather than fantasy (and there is a lot of fantasy involved at the moment). This doesn't mean being pessimistic about potential future advancements, however. It is just very hard to predict what shape those improvements will take.