The version of this I encounter literally every day is:

I ask my coding agent to do some tedious, extremely well-specified refactor, such as (to give a concrete real-life example) changing a commonly used fn to take a locale parameter, because it will soon need to be locale-aware. I am very clear: we are not actually changing any behavior, just the fn signature. In fact, at all call sites, I want it to specify a default locale, because we haven't actually localized anything yet!
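The thread doesn't name the language, but in Python terms the requested change is roughly this (function name and locale constant are hypothetical stand-ins; the point is that every call site passes the default explicitly rather than relying on a default argument):

```python
DEFAULT_LOCALE = "en_US"  # hypothetical placeholder until real localization lands

# before: def format_price(amount): ...
def format_price(amount: float, locale: str) -> str:
    # signature now takes a locale, but behavior is deliberately unchanged
    return f"${amount:.2f}"

# every call site updated mechanically to pass the default:
label = format_price(9.99, DEFAULT_LOCALE)
```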

Said agent, I know, will spend many minutes (and tokens) finding all the call sites, and then I will still have to either confirm each update or yolo it and trust the compiler, the tests, and the agent's ability to deal with their failures. I am OK with this: while I could do this just fine with vim and my LSP, the LLM agent can do it in about the same amount of time, maybe even a little less, and it's a very straightforward change that's tedious for me. I'd rather think about or do anything else and just check in occasionally to approve a change.

But my f'ing agent is all like, "I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?"

And in that moment I guess I know why some people say having an LLM is like having a junior engineer who never learns anything.

Claude 4.7 broke something while we were working on several failing tests and justified itself like this:

> That's a behavior narrowing I introduced for simplicity. It isn't covered by the failing tests, so you wouldn't have noticed — but strictly speaking, [functionality] was working before and now isn't.

I know that an LLM cannot understand its own internal state nor accurately explain its own decisions. And yet, I am still unsettled by that "you wouldn't have noticed".

> Maybe we should just commit the signature change with a TODO

I'm fascinated that so many folks report this; I've literally never seen it in daily CC use. I can only guess that my habit of starting a new session and getting it to write a plan document before acting ("make a file listing all call sites"; "look at refactoring.md and implement") makes it clear when it's time for exploration vs. when it's time for action (i.e. when exploring and not acting would be failing).

At the risk of being That Old Guy, this seems like a pretty bad workflow regression from what ctags could do 30 years ago.

That’s my daily experience too. There are a few more behaviours that really annoy me, like:

- it breaks my code, tests start to fail, and it instantly says “these are all pre-existing failures” and moves on like nothing happened
- or it wants to run a command, I click the “nope” button, and it just outputs “the user didn’t approve my command, I need to try again”, and I need to click “nope” 10 more times or yell at it to stop
- and the absolute best is when, instead of just editing 20 lines one after another, it decides to use a script to save 3 nanoseconds, and it always results in some hot mess of botched edits that it then wants to revert by running `git reset --hard` and starting from zero. I’ve learned that it usually saves me time if I never let it run scripts.

> it breaks my code, tests start to fail and it instantly says “these are all pre existing failures” and moves on like nothing happened

Which reminds us of the most important button the "AI" has, and the similarly bad human employee doesn't:

'X'

Until, of course, we pass responsibility for that button to an "AI".

The other day Codex on Mac gained the ability to control the UI. Will it close itself if instructed though? Maybe test that and make a benchmark. Closebench.

My point was more: will it stop the user closing it?

Make it write a script with dry run and a file name list.

You’ll be amazed how good the script is.

My agent did 20 class renames and 12 tables. Over 250 files and from prompt to auditing the script to dry run to apply, a total wall clock time of 7 minutes.

Took a day to review but it was all perfect!
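A dry-run rename script of the kind described might look something like this sketch (the `RENAMES` mapping and class names are hypothetical stand-ins for whatever list the agent generates):

```python
"""Bulk-rename with a dry-run default. Report first, write only on --apply."""
import pathlib
import re

RENAMES = {"OldBillingService": "BillingService"}  # hypothetical example pair

def apply_renames(text: str, renames: dict[str, str]) -> tuple[str, int]:
    """Return (rewritten text, replacement count). Word-boundary regexes
    avoid clobbering substrings of longer identifiers."""
    total = 0
    for old, new in renames.items():
        text, n = re.subn(rf"\b{re.escape(old)}\b", new, text)
        total += n
    return text, total

def run(paths: list[str], apply: bool = False) -> None:
    """Print planned changes for each file; only write when apply=True."""
    for p in map(pathlib.Path, paths):
        new_text, n = apply_renames(p.read_text(), RENAMES)
        if n:
            print(f"{p}: {n} replacement(s)" + ("" if apply else " (dry run)"))
            if apply:
                p.write_text(new_text)
```

Auditing the diff of a script like this is much faster than auditing 250 individual agent edits, which is presumably why the wall-clock time comes out so low.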

Refactoring already exists.

Asking for code to manipulate the AST is another route. In python it can do absolute magic.
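For the call-site-hunting step in particular, the stdlib `ast` module is enough for a first pass; a minimal sketch (it only catches bare-name calls, not `obj.f(...)` attribute calls):

```python
import ast

def find_call_sites(source: str, func_name: str) -> list[int]:
    """Return the line numbers of every bare-name call to func_name."""
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == func_name):
            lines.append(node.lineno)
    return sorted(lines)
```

Libraries like libcst go further by preserving formatting, which matters once you want the script to rewrite the call sites rather than just list them.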

You can do that in IntelliJ in about 15 seconds and no tokens...

Indeed you can! I don't use IntelliJ at work for [reasons], and LSP doesn't support a change-signature action with defaults for new params (afaik). But it really seems like something any decent coding agent ought to be able to one-shot for precisely this reason, right?

Using an LLM for these tasks really is somewhat like using a Semi to shuttle your groceries home. Absolutely unnecessary, and it can be done with a scooter. But if a Semi is all you have, you use it for everything. So here we are.

The real deal is, while a Semi can do all the things you can do with a scooter, the opposite is not true.

If it’s a compiled language, just change the definition and try to compile.

Indeed! You would think it would have some kind of sense that a commit that obviously won't compile is bad!

You would think.

It would be one thing if it was like, ok, we'll temporarily commit the signature change, do some related thing, then come back and fix all the call sites, and squash before merging. But that is not the proposal. The plan it proposes is literally to make what it has identified as the minimal change, which obviously breaks the build, and call it a day, presuming that either I or a future session will do the obvious next step it is trying to beg off.

Pretty sure it’s a harness or system prompt issue.

I have never seen those “minimal change” issues when using Zed, but have seen them in Claude Code and Aider. I've been using sonnet/opus with high thinking via the API in all the agents I have tested/used.

On my compiled language projects I have a stop hook that compiles after every iteration. The agent literally cannot stop working until compilation succeeds.
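A hook like that could be sketched roughly as follows (assumptions: the harness follows Claude Code's convention that a Stop hook exiting with code 2 blocks the agent from stopping and feeds stderr back to it; the actual build command is a stand-in for your project's compile step):

```shell
# Stop-hook sketch: refuse to let the agent stop while the build fails.
check_build() {
  if "$@" >/dev/null 2>&1; then
    return 0                      # build clean: agent may stop
  fi
  echo "build still failing; keep working" >&2
  return 2                        # exit code 2 = block the stop (assumed convention)
}

# A real hook script would end with something like:
#   check_build cargo build --quiet
#   exit $?
```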

In the case I described no code changes have been made yet. It's still just planning what to do.

It's true that I could accept the plan and hope that it will realize that it can't commit a change that doesn't compile on its own, later. I might even have some reason to think that's true, such as your stop hook, or a "memory" it wrote down before after I told it to never ever commit a change that doesn't compile, in all caps. But that doesn't change the badness of the plan.

Which is especially notable because I already told it the correct plan! It just tried to change the plan out of "laziness", I guess? Or maybe if you're enough of an LLM booster you can just say I didn't use exactly the right natural language specification of my original plan.

I think your expectations are too high. Just understand the limitations and go with the flow.

You need to use explicit instructions like "make a TODO list of all call sites and use sub agents to fix them all".

What's your setup?