MCP is built on JSON-RPC 2.0. JSON-RPC 2.0 is a super lightweight, stateless protocol that lets a client and a server talk to each other using plain old JSON. It’s not tied to HTTP, but it often rides on it. It’s like the chill cousin of REST: no verbs, no URL gymnastics—just methods and parameters, passed as structured JSON.
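
For the uninitiated, a full JSON-RPC 2.0 exchange is just this (the method name and params are invented for illustration):

  {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "getForecast",
    "params": { "city": "Portland", "days": 3 }
  }

And the response:

  {
    "jsonrpc": "2.0",
    "id": 1,
    "result": { "forecast": ["rain", "rain", "more rain"] }
  }

That's basically the whole protocol, plus error objects and notifications. Match the id, parse the result, done.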

MCP calls itself a “protocol,” but let’s be honest—it’s a framework description wrapped in protocol cosplay. Real protocols define message formats and transmission semantics across transport layers. JSON-RPC, for example, is dead simple, dead portable, and works no matter who implements it. MCP, on the other hand, bundles prompt templates, session logic, SDK-specific behaviors, and application conventions—all under the same umbrella.
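
To see what I mean, here's roughly what MCP's initialize handshake looks like, paraphrased from the spec (field shapes approximate; they drift between revisions). Session negotiation, capability flags, and client identity, baked right into the "protocol":

  {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": { "sampling": {} },
      "clientInfo": { "name": "ExampleClient", "version": "1.0.0" }
    }
  }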

As an example, I evidently need to install something called "uv" via a script piped in from the Internet just to "run" the tool, which you do by dropping this into Claude Desktop’s config file (and doing so completely hosed my Claude Desktop):

  {
    "mcpServers": {
      "weather": {
        "command": "uv",
        "args": [
          "run",
          "--with",
          "fastmcp",
          "fastmcp",
          "run",
          "C:\\Users\\kord\\Code\\mcptest\\weather.py"
        ]
      }
    }
  }
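
For context, the weather.py that config points at is roughly this kind of thing. A minimal sketch of fastmcp’s decorator API with a stubbed tool, not my actual file:

  # weather.py - minimal fastmcp server sketch (stubbed data, not a real forecast)
  from fastmcp import FastMCP

  mcp = FastMCP("weather")

  @mcp.tool()
  def get_forecast(city: str) -> str:
      """Return a (fake) forecast for a city."""
      return f"Forecast for {city}: rain, probably."

  if __name__ == "__main__":
      mcp.run()  # defaults to the stdio transport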

They (the exuberant authors) do mention transport, stdio and HTTP with SSE, but that just highlights the confusion we’re seeing here. A real protocol either doesn’t care how it’s transported or it defines the transport clearly. MCP tries to do both and ends up muddying the boundaries. And the auth situation? It waves toward OAuth 2.1 but offers almost zero clarity on implementation, trust delegation, or actual enforcement. It’s a rat’s nest waiting to unravel once people start pushing for real-world deployments that involve state, identity, or external APIs with rate limits and abuse vectors.
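
To be fair to the stdio flavor, it is at least simple: newline-delimited JSON-RPC shoved through a child process’s pipes. Something like this sketch, which glosses over the initialize handshake a real client has to perform first:

  import json
  import subprocess

  # Spawn the server exactly as the Claude Desktop config above does.
  proc = subprocess.Popen(
      ["uv", "run", "--with", "fastmcp", "fastmcp", "run", "weather.py"],
      stdin=subprocess.PIPE,
      stdout=subprocess.PIPE,
      text=True,
  )

  # stdio transport: one JSON-RPC message per line, newline-delimited.
  # (A compliant client must send an initialize request first; skipped here.)
  request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
  proc.stdin.write(json.dumps(request) + "\n")
  proc.stdin.flush()

  # Read the server's reply off stdout.
  print(json.loads(proc.stdout.readline()))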

This feels like yet another centralized spec written for one ecosystem (TypeScript AI crap), claiming universality without earning it.

And let’s talk about streaming versus formatting while we’re at it. MCP handwaves past the reality that content arriving over a stream (like SSE) has totally different requirements than a local response. When you’re streaming partials from a model and interleaving tool calls, you need a well-defined contract for how to chunk, format, and parse responses, especially when tools return mid-stream or you’re trying to do anything interactive.
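
Here’s a toy consumer to make the problem concrete. The event shapes are invented for this example (the spec doesn’t hand you these), but the boundary problem is real:

  import json

  def consume(sse_lines):
      """Toy SSE consumer; event shapes invented for illustration."""
      buffer = []
      for line in sse_lines:
          if not line.startswith("data: "):
              continue  # skip comments, keepalives, blank lines
          event = json.loads(line[len("data: "):])
          if event.get("type") == "text_delta":
              buffer.append(event["text"])  # partial model output
          elif event.get("type") == "tool_call":
              # A tool call landed mid-stream. Is the buffered text a
              # complete thought? Flush it? Hold it until the tool
              # returns? The "protocol" doesn't say.
              yield ("text", "".join(buffer))
              buffer = []
              yield ("tool", event["name"], event.get("arguments", {}))
      if buffer:
          yield ("text", "".join(buffer))

  # e.g.:
  events = [
      'data: {"type": "text_delta", "text": "Checking the weather"}',
      'data: {"type": "tool_call", "name": "get_forecast", "arguments": {"city": "Portland"}}',
      'data: {"type": "text_delta", "text": "Rain. Shocking."}',
  ]
  for item in consume(events):
      print(item)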

Right now, only a few clients are actually supported (Anthropic’s Claude, Copilot, OpenAI, and a couple local LLM projects). But that’s not a bug—it’s the feature. The clients are where the value capture is. If you can enforce that tools, prompts, and context management only work smoothly inside your shell, you keep devs and users corralled inside your experience. This isn’t open protocol territory; it’s marketing. Dev marketing dressed up as protocol design. Give them a “standard” so they don’t ask questions, then upsell them on hosted toolchains, orchestrators, and AI-native IDEs later. The LLM is the bait. The client is the business.

And yes, Claude helped write this, but it's exactly what I would say if I had an hour to type it out clearly.