No, that's not what I'm saying. If you tell an LLM that you need a report on a specific member of Congress, and your prompt says it can use bash tools like grep/curl/ping/git/etc., and that to invoke them it should just return `bash` followed by a formatted code block.

Or it can use `fetch_record` followed by a formatted code block containing the Google search it wants to perform.
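A minimal sketch of the setup I mean (the exact prompt wording and the `parse_tool_choice` helper are hypothetical, just to make the two-option protocol concrete):

```python
# Hypothetical system prompt offering the two response formats
# described above: a bash tool the model already knows natively,
# and a custom fetch_record tool it has never seen in training.
SYSTEM_PROMPT = """You are researching a member of Congress.

You may respond in one of two ways:

1. Write `bash` followed by a formatted code block containing a
   shell command (grep/curl/ping/git/etc. are available).

2. Write `fetch_record` followed by a formatted code block
   containing the Google search you want performed.
"""

def parse_tool_choice(reply: str) -> str:
    """Return which tool the model picked, based on the first line
    of its reply (per the response format above)."""
    first_line = reply.strip().splitlines()[0].strip().lower()
    return "bash" if first_line.startswith("bash") else "fetch_record"
```

In practice the observation is that replies overwhelmingly start with `bash`, even when `fetch_record` would be the better fit.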

The LLM will likely use bash and curl, because it NATIVELY knows what they are and what they can do, while with the other tool you have to feed it a bunch of parameters it isn't used to.

I'm not saying to go throw that into ChatGPT; I'm speaking from experience at our company using MCP vs. bashable stuff: it keeps ignoring the other tools.

It's possible that it's not about "native knowledge" but about how the descriptions for each of the tools (which get mapped into the prompt) are set up, or even their order; LLM behavior can be very sensitive to prompt differences that don't look important.
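To make the point concrete, here's a sketch of what "descriptions mapped into the prompt" means (the tool specs and rendering function are illustrative, not any particular framework's API): the model only ever sees the serialized text, so wording and ordering of the descriptions are part of the prompt.

```python
import json

# Hypothetical tool specs. The model never sees the specs themselves,
# only whatever text they get rendered into below.
TOOLS = [
    {"name": "bash",
     "description": "Run a shell command.",
     "parameters": {"command": "string"}},
    {"name": "fetch_record",
     "description": "Look up a record by source, type, and query id.",
     "parameters": {"source": "string", "record_type": "string",
                    "query_id": "string"}},
]

def render_tools(tools):
    """Serialize tool specs into the prompt text the model reads."""
    return "\n".join(
        f"- {t['name']}: {t['description']} "
        f"params={json.dumps(t['parameters'])}"
        for t in tools
    )

prompt_a = render_tools(TOOLS)        # bash listed first
prompt_b = render_tools(TOOLS[::-1])  # fetch_record listed first
# Same tools, different prompt text -- which is exactly the kind of
# not-obviously-important difference behavior can be sensitive to.
```

So two toolchains exposing the "same" tools can still present meaningfully different prompts.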

I'd be cautious about inferring generalizations about behavior, and then explanations of those generalizations, from observations of a particular LLM used via a particular toolchain.

That said, the fact that it behaves this way in that environment is still an interesting observation.