I've used the analogy of a circular saw before ("it's not really sawing, because you can't feel the wood...")
It's easy to just grab a Skil saw, cut through the beam, and the cut will be somewhat straight. But when every manual stroke counts, there's enough time on a human time scale to correct every little mistake. It's definitely possible to become skilled with a circular saw, but it takes effort that feels unnecessary at first.
This is similar. LLMs are so powerful for writing code that it's easy to become complacent and forget your role as the engineer using the tool: guaranteeing the correctness, security, safety, and performance of the end result. When you're not invested in every if-statement, it's really easy to forget to check edge cases. And as much as I like Claude writing test cases for me, I also have to ensure the coverage is decent, that the implicit assumptions made about external library code are correct, etc. It takes a lot of effort to do it right. I don't know why Mycelium thinks they invented interfaces for module boundaries, but I'm pretty sure they're still just as susceptible to that "0" not behaving as you'd expect, or the empty string being interpreted as "missing." Or the CSG algorithm working, except when your hole edges are incident with some boundary edges.
Edit: spelling
Your analogy with a Skil saw is genius! You can cut much faster, but it's also much more dangerous. Just like AI, indeed.