> When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby.

Very much aligns with my experience. For me this is the most unsatisfying thing about AI-based workflows in general: they miss stuff humans would never miss.

All the time I wonder what am I missing that's right nearby? It's remarkable how often I have to ask Claude Code to fully ingest something before it actually puts it into context. It always tries to laser through to the target it's looking for, which is often not what you want it to look for, or at least not all you want it to look for. Getting these models to open up their field of vision is tough.

Actually, lately I've been feeling the opposite. The LLM catches things I would have overlooked: I ask for a new feature in a certain file, and the LLM suggests fixing a tangentially related file to accommodate the new feature without breaking something else. Maybe this is just the crap legacy codebase I'm working with and how tangled up everything is, but I've definitely found several times now that it caught things I would have missed.

Do you think this is inherent, or an artifact of prompting? Curiosity and side quests lead to higher token usage and longer time to finish, so I can understand why current harnesses and system prompts would not encourage that sort of thing.

But what if a coding agent were prompted to be more curious during development? Like a human developer, it could make mental notes of alternatives to try out and chase suspicious-looking code that seems unrelated to the task at hand. It could even spawn rabbit-hole agents in parallel, as in the sketch below.
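
As a rough illustration of the parallel rabbit-hole idea, here's what a harness could do with plain asyncio. `run_agent` is a hypothetical stand-in for whatever model API the harness wraps, not a real SDK call, and the paths in the usage example are made up:

```python
import asyncio

# Hypothetical stub for whatever agent API the harness wraps;
# a real implementation would call a model here.
async def run_agent(prompt: str) -> str:
    await asyncio.sleep(0)  # stand-in for a real model call
    return f"[agent output for: {prompt[:50]}...]"

async def solve_with_side_quests(task: str, suspicious_spots: list[str]) -> str:
    # Main agent works the task; each "rabbit hole" gets its own
    # read-only exploration prompt, and everything runs concurrently.
    main = run_agent(f"Implement: {task}")
    side_quests = [
        run_agent(
            f"Read {spot} and report anything related to '{task}' "
            "or likely to break. Do not edit anything."
        )
        for spot in suspicious_spots
    ]
    result, *notes = await asyncio.gather(main, *side_quests)
    # Fold the side-quest findings back into a final review pass.
    return await run_agent(
        f"Review this result against these notes:\n{result}\n\n" + "\n".join(notes)
    )

print(asyncio.run(solve_with_side_quests(
    "add pagination to the orders endpoint",
    ["billing/invoices.py", "api/middleware.py"],  # hypothetical paths
)))
```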

Taking a step back, this probably highlights a major hazard of the increased use of LLMs for coding: everyone's style of work is going to converge, because most code will be written by the 2-3 most popular models using the same system prompts.

I've seen something similar: generated solutions feel very Pythonic or Java-esque in languages that are neither Python nor Java (C, Rust, Ruby).

I've had to explicitly direct the machine to read existing sibling code and follow the specific idioms and patterns in use.

It's interesting to compare how agentic search performs, with its targeted reads and lots of tool calls in the stream, against the older but still valid paradigm of using a high-reasoning model like GPT-X-pro and feeding in all the relevant files at once with no tools.

I have found that the "pro" approach is much more holistic and able to tackle rather "creative" problems that require very careful design, and the overall artifact is tight and self-consistent. Claude Code, by comparison, is incredible at exploration and targeted implementation, but indeed is not great at seeing the forest.
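
For anyone who hasn't tried the no-tools paradigm, it amounts to building one big prompt by hand. A minimal sketch, assuming you already know which files are relevant (the model client itself is left out):

```python
from pathlib import Path

def build_holistic_prompt(task: str, paths: list[str]) -> str:
    # Concatenate every relevant file into one prompt so the model
    # sees the whole design space in a single pass, no tool calls.
    sections = [f"--- {p} ---\n{Path(p).read_text()}" for p in paths]
    return (
        f"{task}\n\n"
        "Full context follows. Design a self-consistent solution across "
        "all of these files before writing any code.\n\n"
        + "\n\n".join(sections)
    )

# Usage (paths are hypothetical):
# prompt = build_holistic_prompt(
#     "Redesign the retry logic",
#     ["src/client.py", "src/retry.py", "tests/test_retry.py"],
# )
```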

> All the time I wonder what am I missing that's right nearby?

Add to the prompt "use coding conventions of the file which you are currently editing". That gets the machine (Opus and Sonnet, at least) to go over the nearby code and occasionally mention something obvious.
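
For Claude Code specifically, a natural place to keep that as a standing instruction is the project's CLAUDE.md, which it loads into context automatically. The wording below is just one possible phrasing built around that suggestion:

```markdown
# CLAUDE.md (project root)

## Code style
- Before editing a file, read it in full and skim its sibling files.
- Use the coding conventions of the file you are currently editing,
  not generic Python or Java idioms.
- If nearby code looks suspicious or related to the task, mention it
  before making changes.
```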