Something else worth remembering: while AI may be good at coming up with a "first shot" solution, it can be much worse when you want to change or correct parts of it.

In my experience, AI very often falls into a sort of sunk-cost fallacy (sunk-prompt fallacy?), and then it is very hard to get it to make significant changes, especially architectural ones.

I recently wrote an extension for a popular software product and gave AI the same task. It produced a perfectly working version; however, it was 5x the lines of code of my version because it didn't know the extension API as well, even though I had given it the full documentation. It also hard-coded some solutions to challenges that we definitely don't want hard-coded. A big reason I arrived at a much better solution was that I stepped through the code with a debugger and noted down just the API interactions I actually needed.

The AI was also convinced that some things were entirely impossible. By stepping through the code, I saw that they were possible using parts of the internal API. I suggested a change in a GitHub issue to make the public API better for my use case, and now it is totally not impossible.

At the end of the day, I have to conclude that the amount of time invested in guiding and massaging the AI was too much and not really worth it. I would've been better off debugging the code right away and then building my own version. The potential for AI to do the 80% is there. At this time, though, I personally can't accept its results, but that may also be down to my personal flavour of perfectionism.