I think makers of LLM “chat bots” like Claude Desktop or Cursor have a ways to go when it comes to exposing precisely what the LLM is being prompted with.

Because yes, for the LLM to find the MCP servers, it needs that info in its prompt. And the software currently hides how that information is exposed. Is it prepended to your own message? Does it go at the start of the entire context? If so, wouldn’t real-time changes in tool availability invalidate the entire context? Or does it get added to the end of the context window instead?
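To make the trade-off concrete, here’s a hypothetical sketch in Python. The format and the `build_context` function are made up for illustration; no client is confirmed to work this way. The point is just that tool placement interacts with prefix caching:

```python
# Hypothetical sketch of the placement choice described above. The prompt
# format here is an assumption, not what any particular client actually does.
import json

def build_context(tools: list[dict], history: list[str], user_msg: str) -> str:
    tool_block = "\n".join(json.dumps(t) for t in tools)
    # Placing tools first means any change to `tools` alters the prompt
    # prefix, so a prefix-based KV cache for the whole conversation gets
    # invalidated. Appending them after `history` instead would preserve
    # the cached prefix, at the cost of a less conventional prompt shape.
    return (
        "You have these tools available:\n"
        f"{tool_block}\n\n"
        + "\n".join(history)
        + f"\nUser: {user_msg}"
    )
```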

Like nobody really has this dialed in completely. Somebody needs to make an LLM “front end” that shows the raw de-tokenized input and output. Don’t even attempt to structure it. Give me the input blob and the output blob.
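You can get partway there today with a logging proxy. This is a minimal sketch, assuming an OpenAI-compatible JSON endpoint and non-streaming responses; the upstream URL and path are placeholders, and error handling is omitted:

```python
# Minimal logging proxy: dump the raw request and response bodies of an
# OpenAI-compatible chat endpoint. Point the client's base URL at
# http://localhost:8080 to see exactly what it sends.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.openai.com"  # assumed upstream; swap for your provider

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print("=== RAW REQUEST ===")
        print(body.decode("utf-8", errors="replace"))

        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": self.headers.get("Authorization", ""),
            },
        )
        # Streaming (SSE) responses would need chunked forwarding; this
        # sketch only handles plain request/response pairs.
        with urllib.request.urlopen(req) as resp:
            status = resp.status
            resp_body = resp.read()

        print("=== RAW RESPONSE ===")
        print(resp_body.decode("utf-8", errors="replace"))

        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(resp_body)

HTTPServer(("localhost", 8080), LoggingProxy).serve_forever()
```

It’s still the request blob rather than the true de-tokenized model input, but it’s closer to the ground truth than what the chat UIs show.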

… I dunno. I wish these tools had ways to do more precise context editing. And more visibility. It would help me make more informed choices about what to prompt the model with.

/Ramble mode off.

But slightly more seriously: what is the token cost of an MCP tool? The LLM needs its name, a description, parameters… so maybe 100 tokens max per tool? It’s not a lot, but it isn’t nothing either.
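You can sanity-check the estimate yourself. Rough sketch below, assuming the client serializes each tool definition as JSON into the prompt (real clients may use a more compact format), and using tiktoken’s `cl100k_base` as a stand-in tokenizer since Claude’s actual tokenizer differs:

```python
# Rough token count for one MCP-style tool definition. The tool below is a
# made-up example; the serialization and tokenizer are both assumptions.
import json
import tiktoken

tool = {
    "name": "read_file",
    "description": "Read the contents of a file at the given path "
                   "and return it as UTF-8 text.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Absolute path to the file"},
        },
        "required": ["path"],
    },
}

enc = tiktoken.get_encoding("cl100k_base")
n = len(enc.encode(json.dumps(tool)))
# A definition this size lands well under 100 tokens, but tools with rich
# schemas and long descriptions can easily exceed that.
print(f"~{n} tokens for this tool definition")
```

So the ~100-token guess is about right per simple tool, and a server exposing a few dozen tools starts to eat a meaningful chunk of context.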