Humans also have limited context. For LLMs it's mostly a question of pipeline engineering: pack the context window and system prompt with the most relevant information, and allow tool use so the model can explore the rest of the codebase on demand. Done well, they shouldn't have this particular issue; most of the innovation in current AI coding tools is exactly this pipeline work.
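As a rough sketch of what that packing step can look like (everything here is hypothetical, including the relevance and token-counting functions, which real tools would back with embeddings and the model's tokenizer):

    # Hypothetical sketch: greedily pack the most relevant snippets
    # into a fixed token budget before the prompt reaches the model.
    def pack_context(snippets, relevance, budget_tokens, count_tokens):
        ranked = sorted(snippets, key=relevance, reverse=True)
        packed, used = [], 0
        for s in ranked:
            cost = count_tokens(s)
            if used + cost > budget_tokens:
                continue  # skip anything that would blow the budget
            packed.append(s)
            used += cost
        return "\n\n".join(packed)

Real tools layer much more on top of this, but the core trade is the same: spend the budget on relevance, not volume.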
I think we need an LLM equivalent of this corollary of Fitts's law: the fastest target to click is the one already under the cursor. For an LLM, the least context-expensive feedback is no feedback at all; the model should be able to intuit the correct code in place, at token-generation time.
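For reference, Fitts's law in its common Shannon form predicts movement time from distance D to a target of width W, with a and b as empirical constants:

    MT = a + b \log_2\left(\frac{D}{W} + 1\right)

At D = 0 the log term vanishes and MT = a, the fixed cost alone. The LLM analogue of D = 0 is generating the right token in place: no extra round trip through feedback, only the baseline cost of generation.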