The reality is that you can't have AI do too much for you or else you completely lose track of what is happening. I find it useful to let it do small stupid things and use it for brainstorming.
I don't like it to do complete PRs that span multiple files.
I don't think the "complete PR spanning multiple files" is an issue actually.
I think the issue is when you don't yourself understand what it's doing. If all you do is tell it what the outcome should be from a user's perspective, check that that's what it does, and then merge, then you have a problem.
But if you just use it to get to the code you would've liked to write yourself faster, or make it write the code you'd have written for that boring thing you know needs to be done but never got around to, then it's actually a great tool.
I think in that case it's like IDE-based refactorings enabled by well-typed languages. Way back in the day, there were refactorings that were a royal pain in the butt to do in our Perl code base. I did a lot of them, but they weren't fun. Very simple renames or function extractions that help code readability just aren't done if you have to do them manually. If you can tell an IDE to do a rename and you're guaranteed that nothing breaks, it's simply a no-brainer. Anyone not doing it is simply a bad developer if you ask me.
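To give a concrete (entirely made-up) Kotlin example of the kind of extraction I mean - the names and the validation rule are invented, it's the mechanics that matter:

    data class User(val email: String)

    // Before: the email check sits inline in the registration flow
    fun register(user: User) {
        if (user.email.isBlank() || !user.email.contains("@")) {
            throw IllegalArgumentException("invalid email: ${user.email}")
        }
        save(user)
    }

    // After the IDE's "Extract Function" (renamed here only so both
    // versions can sit side by side): same behavior, guaranteed by the
    // tool, and the intent is now readable at the call site
    fun registerRefactored(user: User) {
        requireValidEmail(user.email)
        save(user)
    }

    private fun requireValidEmail(email: String) {
        if (email.isBlank() || !email.contains("@")) {
            throw IllegalArgumentException("invalid email: $email")
        }
    }

    private fun save(user: User) { /* persistence elided */ }

Doing that by hand in a dynamic code base means re-checking every caller; the IDE does it in seconds with zero risk.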
There's a lot of copy-and-paste coding going on in "business software". And that's fine. I engage in that too, all the time. You have a blueprint of how to do something in your code base. You just need to do something similar "over there". So you know where to find the thing to copy and paste and then adjust. The AI can do it for you even faster, especially if you already know what to tell it to copy. And in some cases all you need to know is that there's something to copy, not where exactly it lives, and it'll still copy it very nicely for you.
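To make that concrete, the prompt can be as thin as this (the file and feature names here are invented; it's the shape that matters):

    Add a soft-delete endpoint for projects. Copy the pattern from the
    teams soft-delete in TeamController.kt, including the audit log call.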
And the resulting PR that does span multiple files is totally fine. You just came up with it faster than you ever could've. Personally I skipped all the "Copilot being a better autocomplete" days and went straight into agentic workflows - with Claude Code to be specific. Using it from within IntelliJ in a monorepo that I know a lot about already. It's really awesome actually.
The funny thing is that at least in my experience, the people that are slower than you doing any of this manually are not gonna be good at this with AI either. You're still gonna be better and faster at using this new tool than they were at using the previously available tools.
> You just need to do something similar "over there". So you know where to find the thing to copy and paste and then adjust. The AI can do it for you even faster, especially if you already know what to tell it to copy. And in some cases all you need to know is that there's something to copy, not where exactly it lives, and it'll still copy it very nicely for you.
The issue with this approach is the mental load of verifying that it did the thing you asked for correctly. And that it didn't mess up something like a condition expression.
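A made-up Kotlin illustration of the kind of mess-up I mean (the order/invoice types are invented): both versions compile and both look plausible in a quick review.

    data class Order(val items: List<String>, val isPaid: Boolean)
    data class Invoice(val lines: List<String>, val isPaid: Boolean)

    fun ship(order: Order) { /* ... */ }
    fun send(invoice: Invoice) { /* ... */ }

    // The blueprint that was copied from:
    fun maybeShip(order: Order) {
        if (order.items.isNotEmpty() && order.isPaid) ship(order)
    }

    // What came out "over there": reads fine at a glance,
    // but && silently became || and the logic changed
    fun maybeSend(invoice: Invoice) {
        if (invoice.lines.isNotEmpty() || invoice.isPaid) send(invoice)
    }

Catching that flipped operator is exactly the verification load that doesn't show up in the demo.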
My belief is that most developers don't interact with their code as anything more than characters on a screen. Their editing process is clicking, selecting, and moving character by character, which makes their whole experience painful for anything that involves a bit of refactoring.
When you exploit things like search-based navigation (project- or file-wide), indexing (LSP or IDE IntelliSense), compiler/linter/test-runner reports (jumping directly to the line mentioned), semantic navigation and manipulation (keyboard-driven), and a few extra tools (git, curl, jq, ...), you'll have a far more pleasant coding experience. Editing is effortless in that case. You think about a solution and it's done.
Coding is literally the most enjoyable part of the job for me. What's not enjoyable is the many WTFs when dealing with low quality code and having to coax specifications from teammates.
The "trick" would be to make it more like a pair programming session than code review.
Also agreed! So many times when pairing with others it's like that. It's very painful to watch other people debug in many cases. Or write code / interact with their tooling. But then there are also sessions where it's a ray of light: people who know their tooling just as well as you do, or maybe even better, and you learn a thing or two. I love it when I come out of a pairing session having learned something that I can incorporate.
And it pains me when I've used something, maybe even specifically called out how I do it, for the n-th time with someone and they still don't catch on. And it doesn't matter whether they fail to pick it up by themselves or whether it's explicitly one of the improvements they're supposed to be working on, because we literally talked about it in the last seven 1:1s or something. Some people "just don't get it", unfortunately. Some people really just aren't cut out to be devs. AI or not.
Yes, but ;) As in: agreed on effective tool use being awesome but unfortunately more rare than I would like. But there are other people "like you and me" out there. Sometimes we have the fortune to work with them. It's such a delight! I love working with someone who's on the same level, where we can pair as equals and get shit done. It's rare though. And even then it's not just done: it's still work in many cases, and some of that really can be improved with this new tool, AI. Just like we were able to replace a 30-minute manual Perl refactoring with a few-second IDE refactoring in Kotlin (or whatever language floats your boat / happens to be used where you are).
I'm not sure I understand this part, to be honest. I don't usually coax specifications from teammates. I coax them from Product people or customers, and while it's not really the most fun sometimes, personally, I do find joy in the fact that I am delivering something that helps the customer. I enjoy fixing a bug both because I like the hunt for the root cause (something AI really isn't great at doing by itself yet, in my experience - but I do enjoy working with it) and because I like it when I can deliver the fix to the customer fast. Customer reports a bug this morning and by the end of the day they have a fix. That's just awesome. Cloud FTW. Gone are the days of getting assigned a bug someone triaged 6 months ago that will go out with a release 3 months from now, ensuring the customer gets the fix installed a year plus from when they reported it (coz of course their admins don't install a new release the day it comes out, right?)

> I don't usually coax specifications from teammates. I coax them from Product people or customers, and while it's not really the most fun sometimes, personally, I do find joy in the fact that I am delivering something that helps the customer.
It's when you're dependent on a service and there's no documentation. Even if you can read the code (and if you can't, you probably should learn), it's better to ask the person that worked on it (instead of making too many assumptions). And that's when the coaxing comes into play.
Fully agreed.
In my view, effective coding-agent use boils down to being good at writing briefs, as you would for any ticket. The better the formatting, detail, and context you can provide, BOTH at the outcome level and at the technical-architecture level, the better your results are.
To put it another way: if, before LLMs came along, you were someone who (purposely or otherwise) became good at writing documentation and briefing tickets for your team, I think there's a decent chance you'll go further with these agentic tools than others who just shove an idea into them and hope for the best.
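For what it's worth, the kind of brief I mean looks something like this - every name in it is invented, it's the structure (outcome plus architecture plus done-criteria) that carries the weight:

    Outcome: users can export their invoices as CSV from the billing page.
    Context: mirror the existing PDF export (PdfExporter registers itself
             with ExportRegistry); the CSV variant should plug in the same way.
    Constraints: no new dependencies; reuse the existing CsvWriter helper.
    Done when: a test covers the new format and the full suite passes.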