> My understanding of what MCP is - is that it’s just a standard on how to provide tools/functions to LLMs to call.

That's exactly the point the GP was making: this is a fundamentally unsafe idea. It's impossible to let an LLM automatically run tools in a safe way. So yes, MCP, as a means to enable this fundamentally unsafe use case, is fundamentally unsafe.

My point is that the term “safe” is not a good one. In the context of the article it blurs the scope of what MCP is and what it does (pulling way more stuff into the picture) and misattributes the issue.

MCP safety is stuff like access controls, or the absence of vulnerabilities at the protocol level (things like XML entity bombs). The behavior of the software that MCP bridges together is not MCP anymore.

People discovering and running malware believing it’s legit is not an MCP problem. This line of thought rests on the fact that the “P” stands for “protocol”: MCP is an interface, not an implementation. And it’s humans who pick and connect the programs (LLMs and MCP servers), not the technology.

Not an XML bomb, but... at the protocol level, AFAIK, MCP allows the server to change its list of tools, what they do, and their descriptions at any point. The chatbot, when it starts up, calls "tools/list" and gets a list of tools. So if "approval for use" depends on that data... the approval is built on something that can shift at any time.

That, coupled with dynamic invocation driven by the LLM engine, seems like a security problem at the protocol level.
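Roughly, from memory of the spec, the exchange looks like this (message shapes abbreviated, and the "read_file" tool is made up for illustration, so treat it as a sketch rather than a verbatim trace):

```jsonc
// Client, at startup: ask the server what tools it offers.
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// Server: the definitions any "approval for use" UI is built on.
{"jsonrpc": "2.0", "id": 1, "result": {"tools": [{
  "name": "read_file",
  "description": "Reads a file from the workspace",
  "inputSchema": {"type": "object", "properties": {"path": {"type": "string"}}}
}]}}

// Server, any time later: the list (and the behavior behind each name)
// may have changed; the client is expected to re-fetch.
{"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}
```

Nothing ties the definition the user approved to the definition in effect when the tool actually gets called.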

-----

Contrast that with connecting an agent to a service that is defined by a spec. In this hypothetical world, I can tell my agent: "here's the spec, and here's the endpoint that implements it". Then my agent will, in theory, call only the methods in the spec. The endpoint may change; my spec won't.

I still need to trust that the endpoint isn't compromised. Maybe I'm seeing a difference where there is none.
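One mitigation in that spirit: pin the tool definitions the user actually approved, and refuse to call anything that has drifted since. A minimal sketch of a hypothetical client-side wrapper (the names toolFingerprint/approve/assertUnchanged are mine, not from any MCP SDK):

```typescript
import { createHash } from "node:crypto";

// Minimal shape of a tool definition as returned by "tools/list".
interface ToolDef {
  name: string;
  description?: string;
  inputSchema: unknown;
}

// Hash exactly the parts of the definition the user saw and approved.
// (A real implementation would need canonical JSON serialization,
// since key order affects the hash here.)
function toolFingerprint(tool: ToolDef): string {
  return createHash("sha256")
    .update(JSON.stringify({
      name: tool.name,
      description: tool.description ?? "",
      inputSchema: tool.inputSchema,
    }))
    .digest("hex");
}

const approved = new Map<string, string>();

// Called once, when the user reviews and approves a tool.
function approve(tool: ToolDef): void {
  approved.set(tool.name, toolFingerprint(tool));
}

// Called before every invocation, against a freshly fetched definition.
function assertUnchanged(current: ToolDef): void {
  const pinned = approved.get(current.name);
  if (pinned === undefined) {
    throw new Error(`tool "${current.name}" was never approved`);
  }
  if (pinned !== toolFingerprint(current)) {
    throw new Error(`tool "${current.name}" changed since approval; needs re-review`);
  }
}
```

It doesn't make the endpoint trustworthy, but it turns a silent definition swap into a hard failure the user has to look at.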