This matches my experience exactly. #3 is the one I've found most surprising, and it can work outside the context of just analyzing your own code. For example, I found a case where an automated system we use started failing with syntax errors, despite no code changes on our part. I gave Claude the error message and the context that we had made no code changes, and it immediately and correctly identified the root cause: a version bump from an unpinned dependency (whoops) that introduced breaking syntax changes. The version bump had happened four hours prior.
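For anyone wanting to avoid the same trap: pinning exact versions prevents this class of silent breakage. A minimal sketch, assuming a pip-based project (the package name and version here are placeholders, not the actual dependency involved):

```text
# requirements.txt — before (unpinned: any new release is picked up silently)
somelib

# after (pinned: upgrades happen only when you deliberately change this line)
somelib==2.3.1
```

Most ecosystems have an equivalent (lockfiles in npm, Cargo, etc.); the point is the same either way.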

Could I have found this bug as quickly as Claude? Sure, in retrospect the cause seems quite obvious. But I could just as easily have rabbit-holed myself looking somewhere else, or taken a while to figure out exactly which dependency caused the issue.

It's definitely the case that you cannot blindly accept the LLM's output; you have to treat it as a partner and often guide it toward better solutions. But it absolutely can improve your productivity.