So why do I have to make an MCP server? Could it not just hook into the OpenAPI JSON spec?
And who defines the credentials? With OpenAPI, the spec defines them. MCP doesn't even seem to allow for credentials, for now. And I don't think dropping that requirement is a good thing in this case: I'd like an API that I can reach from anywhere on the net and secure with, for instance, an API key.
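For what it's worth, key-based security is already a few lines in an OpenAPI document. A minimal sketch (the scheme name and the `X-API-Key` header are just examples, not anything standardized):

```json
{
  "openapi": "3.0.3",
  "info": { "title": "Example API", "version": "1.0.0" },
  "components": {
    "securitySchemes": {
      "ApiKeyAuth": { "type": "apiKey", "in": "header", "name": "X-API-Key" }
    }
  },
  "security": [ { "ApiKeyAuth": [] } ],
  "paths": {}
}
```

Any client that reads the spec knows exactly which header to send; there's nothing new to invent.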
And what is the URL? You have to define this for MCP too. In Cursor, for instance, you manually enter the endpoint under a key named "url."
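A remote server entry in Cursor's `mcp.json` looks roughly like this (the server name and URL here are made up):

```json
{
  "mcpServers": {
    "my-server": {
      "url": "https://example.com/mcp"
    }
  }
}
```

That URL is something you paste in by hand. Compare that with OpenAPI, where the spec document itself carries a `servers` list the client can read.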
How will the LLM get that info? This was shown to be easy about 1.5 years ago, when GPTs demonstrated how readily they understand an OpenAPI spec and can use any endpoint on the net as a tool.
I don't disagree that there needs to be a framework for using endpoints. But why can't it reach out to an OpenAPI endpoint? What do we gain from a new "protocol"? I've created a couple of MCP servers, and it feels like undoing 10 years of progress in creating and documenting web APIs.
Let me ask you this in reverse then: Have you created a basic API and used it as a tool in a GPT? And have you created an MCP server and added it to applications on your computer? If you have done both and still feel MCP offers something better, please tell me, because I found MCP to be solving a problem that didn't need solving.
Create an awesome framework for reaching out to Web APIs that reads the OpenAPI definition of the endpoint? GREAT! Enforce a new Web API standard that is much less capable than what we already have? Not so great.
You seem to miss that an MCP server IS an HTTP server already. It's just not safe to expose to the net, and it comes with a new and more limited spec for how to document and set it up.