I hope that someday LLMs will interact with code mostly via language servers, rather than reading the code itself (which both frequently confuses the LLM, as you've noted, but is also simply a waste of tokens).
Why? I suspect that writing code itself is extremely token-efficient (unless your keywords happen to be some silly, super-long alien text).
Like which do you think is more token-efficient?
1)
2)

Not sure I follow. You seem to have omitted the part of 1) explaining how the LLM knew that my_function even existed - presumably, it read the entire file to discover that, which is way more input tokens than your hypothetical available_functions response.
Reading files is not that input-token heavy, I suspect. But anyway, I omitted it because presumably the LLM would have read the file regardless, to gain local context in general.
LSP is meant for IDEs and very deterministic calls. Its APIs look like this: give me the definition at <file> <line> <character>. This makes sense for IDEs, because all of those values can be deterministically captured from your cursor position.
LLMs are notoriously bad at counting.
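To make the point concrete, here is a minimal sketch of what the wire format looks like: a JSON-RPC `textDocument/definition` request, built in Python. The URI and coordinates are made up for illustration; note that LSP positions are zero-based (line, character) offsets, exactly the kind of precise counting LLMs get wrong.

```python
import json

def definition_request(uri: str, line: int, character: int, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 textDocument/definition request framed as LSP expects."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "textDocument/definition",
        "params": {
            "textDocument": {"uri": uri},
            # Zero-based offsets: off-by-one here and you get the wrong symbol.
            "position": {"line": line, "character": character},
        },
    }
    body = json.dumps(payload)
    # Over stdio, LSP frames every message with a Content-Length header.
    return f"Content-Length: {len(body)}\r\n\r\n{body}"

msg = definition_request("file:///src/app.py", line=41, character=17)
```

There is no "definition of my_function" call in the base protocol; everything is addressed by cursor-style coordinates.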
I think one could easily build an MCP tool wrapping LSP which smooths over those difficulties. What the LLM needs is just a structured way to say "perform this code change" and a structured way to ask things like "what's the definition of this function?" or "what functions are defined in this module?"
Not much different from what agents already do today inside of their harnesses, just without the part where they have to read entire files to find the definition of one thing.
So not using LSP directly, but rather something in the middle that uses LSP as an implementation detail.
So far, enabling LSP in Claude has only added messages like "this is an old diagnostic from before my edit".