The biggest thing that has changed in my experience (at least in a professional setting) is that now that people have AI agents, they don’t really have any motivation to improve. If you tell them something needs to be changed, they just reprompt the agent until it’s good enough - but the most sinister thing is that they keep making the same mistakes over and over again. There is no growth, no shared understanding that disseminates through review - just re-prompting. They often use my review comments directly as prompts! People don’t understand code they generated themselves just a few days later. And not in an “oh, just let me reread this real quick” kind of way, but in an “I have absolutely no clue wtf I am even looking at” way.
I’ve been sounding the alarm in my own circles about the disappearance of junior roles due to AI - which will lead to a shortage of seniors in just a few years - but there is something even more sinister: juniors no longer improve enough to become intermediates and seniors, and worse… seniors and intermediates have regressed to juniors through laziness and cognitive offloading.
Like, if I’m just sending code review comments to a middleman prompter - why not skip the middleman? I’m already wrangling a handful of AI agents myself every day, so what is even the point of this extra person? I don’t want to replace people with AI, but if the person is so lazy that even I would prefer just doing the prompting myself, then why shouldn’t I replace them with AI?
That does sound like an intractable problem.
My problem, if and when I get started, would be tangential to this. It is clear that communication with LLMs is changing so rapidly that there may not be any universal, long-lived lessons to be learned from optimizing your interactions with any particular model.
I know that one-shotting things is probably not best, but determining how far to take it and when to cut over and finish it myself is something that I want to learn, but perhaps not too well.
My skills are an eclectic mix of high- and low-level. I know exactly what, for example, a frequency analyzer can do for me, but controlling the $400K frequency analyzer is often best left to the guy who lives and breathes it.
Likewise, my debugging skills are exceptional, but I am not as proficient with any particular debugger as the people who live in the debugger daily because they write terrible code. My debugging skills are mostly predicated on something that should be a big part of your daily life -- reading code.
(To be fair, I have known a very few people who live in the debugger because they are dealing with intractable problems caused by other people, but those are the rarities. I, myself, used to live in the debugger a lot when I was writing graphics drivers for the mostly undocumented Windows 3.1.)
Which brings us to your reports and/or co-workers. These people have always existed. They pride themselves on, and partly derive their value from, the tools they think they know inside out.
In truth, they don't know the tools, but they are intimately familiar with the controls of the tool, like a child who knows how to make a smartphone do exactly what their parent needs it to do.
So, as long as it's a tool you need but too painful for you to control directly, these people are useful. In your case, you already have cause to use the LLM directly on a regular basis, so, as you point out, the value of these people is diminishing and maybe already negative.
> why shouldn’t I replace them with AI?
You probably should. Or, at a minimum, if possible, you should restructure things so that the people who are doing things that you are already proficient at are doing them for someone else who isn't as proficient at the tools, and you can get out of that loop.
One reason I am not yet completely insane is that I realized about 40 years ago that the place I hated being most was inside someone else's debug loop. Because most people are objectively stupid, and this goes double for people who need you in that loop. So I always work to structure my responsibilities and work setup to avoid this. If I find a bug in an internal supplier's code, I create an MVCE (a minimal, complete, and verifiable example) and hand it over to them. If an internal customer claims to find a bug in my code and doesn't provide an MVCE, I figure out what they are attempting to do, create my own MVCE for their function, and either fix it if it really was my problem, or hand it back to them and ask them to expand on it until it breaks and get back to me.
Reflecting on this, I realize that I am probably not too likely to succumb to interminable prompting loops, because that wouldn't feel much different from what I have avoided most of my life. On the few occasions over the last four decades when being involved in someone else's debug loop was completely unavoidable, the most useful thing I brought to the table when they were out of ideas and ready to throw a lot of effort at trying random things was a series of questions like "What are you going to learn from that? What will your decision points be?"
And I'm not much of a gambler, so I won't be spending too many tokens hoping "the next time, for sure!"