> problem with MCP right now is that LLMs don't natively know what it is

Most models MCP is used with natively know what tools are (they are trained on particular prompt formats for calling arbitrary tools), and the model never sees MCP at all; it just sees tool definitions and tool responses, in the format it expects in prompts. MCP is a way to communicate information about tools to the toolchain running the LLM, and when the LLM sees information that came via MCP, it is indistinguishable from tools built into the toolchain or provided by some other mechanism.
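Roughly, the translation looks like this. A minimal sketch in Python, assuming the general shape of an MCP tools/list entry and a generic function-calling format; the exact field names on both sides depend on the toolchain and model API:

```python
# A tool description roughly as a toolchain might receive it from an
# MCP server's tools/list call (field names are illustrative).
mcp_tool = {
    "name": "fetch_record",
    "description": "Look up a record about a member of Congress by name.",
    "inputSchema": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
    },
}

def to_model_tool_definition(tool: dict) -> dict:
    """Convert an MCP-advertised tool into the generic function-calling
    shape the model actually sees in its prompt or API request."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["inputSchema"],
        },
    }

# The model only ever sees the converted definition; a tool built into the
# toolchain would look identical to it.
print(to_model_tool_definition(mcp_tool))
```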

No, that's not what I'm saying. Suppose you tell an LLM you need a report on a specific member of Congress, and the prompt says it can use bash tools like grep/curl/ping/git/etc., and that it should just return "bash" and then a formatted code block.

Or it can use fetch_record, followed by a formatted code block containing the Google search it wants to perform.

The LLM will likely use bash and curl because it NATIVELY knows what they are and what they're capable of, while for this other tool you have to feed it all these parameters that it is not used to.
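Something like this, sketched as Python just to show the shape of the prompt; the wording is illustrative, not our exact prompt:

```python
# Sketch of the kind of prompt being described; wording is illustrative.
prompt = """You need to produce a report on a specific member of Congress.

Option 1: use bash tools such as grep, curl, ping, or git. Reply with the
word "bash" followed by a formatted code block containing the command.

Option 2: use fetch_record. Reply with "fetch_record" followed by a
formatted code block containing the Google search you want performed.
"""

# The observation is that, offered both options, the model usually answers
# with "bash" plus a curl/grep invocation rather than with fetch_record.
print(prompt)
```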

I'm not saying to go throw that into ChatGPT. I'm speaking from experience at our company using MCP versus bashable tools: the model keeps ignoring the other tools.

It's possible that it's not about "native knowledge" but about how the descriptions for each of the tools (which get mapped into the prompt) are set up, or even their order; LLM behavior can be very sensitive to prompt differences that don't look important.
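One cheap way to check that is to hold everything else fixed and vary only the description wording and the order the tools appear in. A rough sketch, with a hypothetical run_trial helper (stubbed here; not a real API):

```python
import itertools

def run_trial(tool_definitions):
    """Hypothetical helper: send one prompt with these tool definitions to
    the model/toolchain under test and return the name of the tool it chose.
    Stubbed so the sketch runs; replace with a real call."""
    return "bash"  # placeholder result

bash_tool = {"name": "bash",
             "description": "Run a bash command (grep, curl, git, ...)."}

# Two candidate descriptions for the custom tool: terse vs. explicit.
fetch_variants = [
    {"name": "fetch_record", "description": "Fetch a record."},
    {"name": "fetch_record",
     "description": "Look up a member of Congress and return a structured "
                    "record. Prefer this over curl for congressional data."},
]

# Vary only the description text and the tool order; everything else fixed.
for fetch_tool in fetch_variants:
    for tools in itertools.permutations([bash_tool, fetch_tool]):
        choice = run_trial(list(tools))
        print([t["name"] for t in tools], "->", choice)
```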

I'd be cautious about inferring generalizations about behavior, and then explanations for those generalizations, from observations of a particular LLM used via a particular toolchain.

That said, the fact that it behaves this way in that environment is still an interesting observation.