We had `curl`, HTTP and OpenAPI specs, but we created MCP. Now we're wrapping MCP into CLIs...

> but we created MCP. Now we're wrapping MCP into CLIs...

Next we'll wrap the CLIs into MCPs.

MCP is a dead end, just ignore it and it will go away.

And yet without MCP these CLI generators wouldn't be possible.

It's building on top of them: MCP did address some issues (which arguably could've been solved better with CLIs to begin with, like adding proper help texts to each command)... it just also introduced new ones.

Some of which still won't be solved via switching back to CLI.

The obvious one being authentication and privileges.

By default, I want the LLM to have full read-only access. This is straightforward to solve with an MCP because the tools have specific names.

With a CLI it's not as straightforward, because the LLM will start piping commands together, and the same CLI is often used for both read and write access.

All solvable issues. While I suspect CLIs are going to get a lot more traction over the next few months, they're still not the thing we'll settle on, unless the privileges situation can be solved without making me greenlight commands every 2 seconds (or without ignoring agents' tendency to occasionally go batshit insane and randomly wipe things out while running in yolo mode).
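The per-tool allowlist point above can be sketched in a few lines. This is a hypothetical client-side gate, not any particular agent's actual implementation, and the tool names are made up for illustration:

```python
# Sketch of the per-tool allowlist idea: MCP tools have stable names,
# so read-only tools can be pre-approved once, while anything else
# still triggers a human confirmation prompt. Tool names are hypothetical.

READ_ONLY_TOOLS = {"get_issue", "list_pull_requests", "search_code"}

def needs_approval(tool_name: str) -> bool:
    """Return True if a human must greenlight this tool call."""
    return tool_name not in READ_ONLY_TOOLS

print(needs_approval("get_issue"))       # False: pre-approved read
print(needs_approval("delete_branch"))   # True: still prompts
```

With a raw CLI there's no equivalent stable unit to whitelist, since one binary plus flags and pipes can do both reads and writes.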

Exactly. Once you start looking at MCP as a protocol to access remote OAuth-protected resources, not an API for building agents, you realize the immense value.

Aside from consistent auth, that's what all APIs have done for decades.

Only takes 2 minutes for an agent to sort out auth on other APIs so the consistent auth piece isn't much of a selling point either.

Yes, MCP could've been solved differently, e.g. with an extension to the OpenAPI spec, at least from the perspective of REST APIs... But you're misunderstanding the selling point.

The issue is that granting the LLM access to the API needs something more granular than the choice between "I don't care, just keep doing whatever you wanna do" and getting prompted every 2 seconds for the LLM to ask permission to access something.

With MCP, each of these actions is exposed as a tool and can be safely added to the "you may execute this as often as you want" list, and you'll never need to worry that the LLM randomly decides to delete something, because you'll still get a prompt for that, as deletion hasn't been whitelisted.

This is once again solvable in different ways, and you could argue the current way is pretty suboptimal too, because I don't really need the LLM to ask permission to delete something it just created, for example. But MCP only lets me whitelist actions wholesale, hence still some unnecessary security prompts. The MCP tool does add a different layer, though: we can use it both to essentially remove the authentication on the API we want the LLM to call and to greenlight actions for it to execute unattended.

Again, it's not a silver bullet, and I'm sure what we'll eventually settle on will be something different. As of today, though, MCP servers provide value to the LLM stack. Even if this value might be provided better some other way, current alternatives all come with their own trade-offs.

It’s not a dead end; MCP servers are a big unlock when using something like Cursor or Copilot. I think people who say this don’t quite know what MCP is: it’s just a thin wrapper around an API that describes its endpoints as tools. How is there not a ton of value in that?

MCP is the future in enterprise and teams.

It's as you said: people misunderstand MCP and what it delivers.

If you only use it as an API? Useless. If you use it on a small solo project? Useless.

But if you want to share skills across a fleet of repos? Deliver standard prompts to baseline developer output and productivity? Without having to sync them? And have them updated live? MCP prompts.

If you want to share canonical docs like standard guidance on security and performance? Always up to date and available in every project from the start? No need to sync and update? MCP resources.

If you want standard telemetry and observability of usage? MCP because now you can emit and capture OTEL from the server side.

If you want to wire execution into sandboxed environments? MCP.

MCP makes sense for org-level agent engineering but doesn't make sense for the solo vibe coder working on an isolated codebase locally with no need to sandbox execution.

People are using MCP for the wrong use cases and then declaring it excess, when the real use case is standardized remote delivery of skills and resources. Tool execution is secondary.

So just to clarify, in your case you're running a centralized MCP server for the whole org, right?

Otherwise I don't understand how MCP vs CLI solves anything.

Correct.

Centralized MCP server over HTTP that enables standardized doc lookup across the org, standardized skills (as MCP prompts), MCP resources (virtual indexes of the docs, similar to how Vercel formatted their `AGENTS.md`), and a small set of tools.

We emit OTEL from the server and build dashboards to see how the agents and devs are using context and tools, and which documents are "high signal", meaning they get hit frequently, so we know that tuning those docs will yield more consistent output.

OAuth lets us see the users because every call has identity attached.
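The shared prompts/resources setup described above can be sketched without any SDK: at its core the server is a JSON-RPC dispatcher over a central registry. This is a simplified illustration assuming the standard MCP method names (`prompts/list`, `resources/read`); the prompt names and doc URIs are hypothetical:

```python
# Sketch of a centralized MCP server's registry: one shared set of
# prompts and doc resources that every repo in the org reads live,
# instead of syncing files around. Names and URIs are made up.

PROMPTS = {
    "code-review": "Review this diff against our security baseline...",
}
RESOURCES = {
    "docs://security/guidance": "# Security guidance\nAlways validate input...",
}

def handle(method: str, params: dict) -> dict:
    """Dispatch a (simplified) MCP request to the shared registry."""
    if method == "prompts/list":
        return {"prompts": [{"name": name} for name in PROMPTS]}
    if method == "resources/read":
        uri = params["uri"]
        return {"contents": [{"uri": uri, "text": RESOURCES[uri]}]}
    raise ValueError(f"unsupported method: {method}")

print(handle("prompts/list", {}))
```

Because every client hits the same registry, updating a doc or prompt on the server updates it for the whole fleet at once, and each call is a natural point to attach identity and telemetry.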

MCP only exists because there's no easy way for AI to run commands on servers.

Oh wait there's ssh. I guess it's because there's no way to tell AI agents what the tool does, or when to invoke it... Except that AI pretty much knows the syntax of all of the standard tools, even sed, jq, etc...

Yeah, ssh should've been the norm, but someone is getting promoted for inventing MCP

No, it’s more like: because AI can’t know every endpoint and what it does, MCP allows injecting the endpoints and a description into context so the AI can choose the right tool without additional steps.
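To make that concrete, here's roughly what gets injected. An MCP `tools/list` response describes each endpoint with a name, a description, and a JSON Schema for its inputs; the `get_weather` tool here is hypothetical:

```python
# Sketch of the "inject endpoints into context" point: the shape of an
# MCP tools/list response. The client serializes these descriptions
# into the model's context, which is how the model "knows" the
# endpoint exists and what arguments it takes. Example tool is made up.

tools_list_response = {
    "tools": [
        {
            "name": "get_weather",
            "description": "Fetch current weather for a city.",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ]
}

print(tools_list_response["tools"][0]["name"])
```

This is what `ssh` plus man-page knowledge doesn't give you for internal or long-tail APIs the model has never seen.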

Agents can't write bash correctly so... I wonder about your claim

They cannot? We have a client from 25 years ago, and all the devops for them is massive bash scripts; 1000s of them. Not written by us (well, some parts, as maintenance), and really the only 'thing' that almost always flawlessly fixes and updates them is Claude Code. Even with insane bash-in-bash-in-bash escaping and all kinds of not-well-known constructs. It works. So we have no incentive to refactor or rewrite. We planned to 5 years ago and postponed, as we first had to rewrite their enormous and equally badly written ERP for their factory. Maybe that wouldn't have happened either now...
