What is it about remote MCPs that lends itself to security? For instance, do you think they are more secure than a traditional endpoint?

MCPs are basically just JSON-RPC. The benefit is that if you have applications that require an API key, you can build a server to control access (especially for enterprise). It's the same as REST APIs, except that by following a specific convention we can take advantage of generic tools (like the one I built), and you don't need to rely on poor documentation to connect, or train a model to use your very specific CLI.
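To make the "just JSON-RPC" point concrete, here's a minimal sketch of what an MCP tool invocation looks like on the wire. The method name `tools/call` follows the MCP spec's convention; the tool name and arguments are hypothetical stand-ins:

```python
import json

# MCP is JSON-RPC 2.0 under the hood: calling a tool is just a request
# with the conventional method "tools/call" and a params payload.
# "get_customer" and its arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_customer",                # hypothetical tool name
        "arguments": {"customer_id": "42"},    # hypothetical arguments
    },
}

# Serialize for transport (stdio or HTTP, depending on the server).
wire = json.dumps(request)
print(wire)
```

Because the convention is fixed, a generic client can discover and call any server's tools without bespoke glue code, which is the whole value proposition over an undocumented REST API.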

But if you have customer-facing APIs, then all of these problems were already solved in an enterprise context. You can force an OAuth flow from skills if you want.

I don’t think CLIs are the path forward either, but you certainly don’t have to teach a model how to use them. We’ve made internal CLIs that adhere to no best practices and expose limited docs; models since 4o have used them with no issue.

The amount of terminal-bench-style data is just much higher and more predictable in RL environments. Getting a non-thinking model to use an MCP server, even hosted products, is an exercise in frustration compared to exposing a CLI.

A lot of our work is over voice, and I’ve found zero MCPs that I haven’t immediately wanted to wrap in a tool. I’ve actually had zero MCPs perform acceptably at all (most recently last week with a data-warehouse MCP and Opus 4.6, where even the easiest queries did not work).

LLMs don't care about MCP vs. CLI. CLIs enable LLMs to fetch/mutate data and build scripts with the same program. I think of it like a Linux dev in a box: sometimes you want to just call a tool, and sometimes you want to write a small program that calls that tool instead.
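The "call it directly or script it" duality above can be sketched with any CLI. Here `wc -c` stands in for an arbitrary internal tool; the `char_count` wrapper is a hypothetical helper, not an existing API:

```python
import subprocess

def char_count(text: str) -> int:
    """Invoke the `wc -c` CLI on a string and return the byte count.

    A stand-in for any internal CLI: the same binary works for a
    one-off call or as a building block inside a larger script.
    """
    result = subprocess.run(
        ["wc", "-c"],
        input=text.encode(),
        capture_output=True,
        check=True,
    )
    return int(result.stdout.split()[0])

# Mode 1: just call the tool once.
print(char_count("hello"))  # → 5

# Mode 2: write a small program that composes the same tool.
total = sum(char_count(word) for word in ["foo", "quux"])
print(total)  # → 7
```

This is the "Linux dev in a box" idea: the agent doesn't need a separate protocol for ad-hoc calls versus scripting, because the shell already unifies both.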