The problem with AI isn’t new; it’s the same old problem with technology: computers don’t do what you want, only what you tell them. A lot of PRs can be judged by how well they are described and justified, because the code itself isn’t what matters most; it’s the problem you are solving with it that is. People are often great at defining problems, AIs less so IMHO. Partially because they simply have no understanding, and partially because they over-explain everything to the point where you just stop reading, so you never get to the core of the problem. And even if you do, there’s a good chance the AI misunderstood the problem and the solution is wrong in some more or less subtle way. This is made worse by the sheer overconfidence of AI output, which quickly erodes any trust that it actually understood the problem.