I buy the productivity argument, but I’m not convinced that “30 minutes reading/tweaking agent output” is equivalent, for learning, to building it yourself.

If your goal is the feature, then yes: letting the agent do the heavy lifting and reviewing the diff afterward is a huge win.

But if your goal is understanding / skill-building, the hard part usually isn’t seeing a working solution. It’s doing the messy work of (a) making design choices, (b) getting stuck, (c) debugging, and (d) forming the mental model that lets you reproduce it later. Reviewing a correct implementation can create a feeling of “I get it,” but that feeling often doesn’t survive a blank file.

I’ve noticed this in my own hobby coding: LLMs are great for familiarity and unblocking progress, but the learning “sticks” much more when I’ve had to struggle through the failure modes myself. I’m watching the same dynamic play out with my son using ChatGPT to study for physics and calculus: it feels deep for him in the moment with the LLM, but exam-style transfer exposes the gaps.

If I had four hours to dedicate to this particular learning project, I would still use LLMs to help me along the way, with the expectation that I'd learn more from those four hours than from the same amount of time spent deliberately avoiding LLMs.

We've been given a tool that lets us ask questions in human language and get back answers that are correct 90% of the time! And that remaining 10% means we have to engage critically with those answers, which is a useful learning trick in its own right.