How likely are we to look back on Agent/MCP/Skills as some early Netscape peculiarity? I would dive into adopting them if I didn't think some new thing would displace the paradigm in a fortnight.

I've built a number of MCP servers, including an MCP wrapper. I'd generally recommend you skip it unless you know you need it. Conversely, I'd generally recommend you write up a couple skills ASAP to get a feel for them. It will take you 20 minutes to write and test some.

MCP does three things conceptually: it lets you build a bridge between an agent and <something else>, it specifies a UI+API layer between the bridge and the LLM, and it formalizes the description of that bridge in a tool-calling format.
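To make the first and third parts concrete, here's a minimal sketch assuming the official `mcp` Python SDK's FastMCP helper (the tool itself is made up for illustration):

```python
# Minimal sketch of an MCP "bridge", assuming the official `mcp` Python SDK.
# The tool here is a made-up illustration, not a real service.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-bridge")

@mcp.tool()
def get_forecast(city: str) -> str:
    """The docstring and type hints become the formalized tool-calling
    description the LLM sees when it discovers this bridge."""
    # A real bridge body would call the <something else>: an API, a DB, a CLI...
    return f"(stub) sunny in {city}"

if __name__ == "__main__":
    mcp.run()  # speaks the protocol to the agent host (stdio by default)
```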

It's that UI+API layer that's the biggest pain in the ass, in my opinion. Sometimes you need it; for instance, if you want an agent to access your emails, a high-quality MCP server that can't destroy your life through enthusiastic tool calling makes sense.

If, however, you have, say, a CLI tool or simple API that's reasonably self-documenting and that you're willing to let the agent run, and/or you need specific behavior in a particular context, then a skill can just be a markdown file that explains the what, how, and why.
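For anyone who hasn't looked yet, a skill really is roughly just this. A hypothetical example, loosely following Anthropic's published format (YAML frontmatter with name/description, then freeform markdown instructions):

```markdown
---
name: changelog-writer
description: Use when the user asks to draft or update a CHANGELOG entry.
---

# Changelog writer

What: turn merged PR titles into CHANGELOG.md entries.
How: run `git log --oneline <last-tag>..HEAD`, group the commits into
feat/fix/chore buckets, append a dated section to CHANGELOG.md.
Why: keep entries consistent with the Keep a Changelog style.
```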

Agreed. I use only one MCP server regularly and it’s a custom one integrated into my Qt desktop app. It has tools for inspecting the widget tree, using selectors to click/type/etc., and taking screenshots: functionality that would otherwise be hard or impossible to implement reliably using CLI calls, and it gives Claude a closed feedback loop.

All the public MCP servers I’ve seen have been a disaster, with too many tools and tokens polluting the context. MCP is really most useful when you need tight integration with some other environment and can write a little custom wrapper to provide it.

> All the public MCP servers I’ve seen have been a disaster, with too many tools and tokens polluting the context.

People like to shit on Copilot's UX, but something it does well is making it incredibly easy to switch off individual tools you don't need, per MCP server. In general I've found its MCP story the best of the lot (Codex/CC/Gemini); it makes very good use of VS Code's extension integration.

If you know you need them though, do use them. There are four MCP servers I use regularly and they're enormously useful. They're all around the same topic though - pulling in context/data from sources. One is dual-use, in that I occasionally also use it for things like dashboard generation.

I will say, when using MCP, be selective about which tools you enable. A lot of the time a server comes with, say, 30 tools and you only personally care about 5 of them. The other 25 are just rotting your context.

Agent/MCP/Skills might be "Netscape-y" in the sense that today's formats will evolve fast. But Netscape still mattered: it lost the market, not the ideas. The patterns survived (JavaScript, cookies, SSL/TLS, progressive rendering) and became best practices we take for granted.

The durable pattern here isn't a specific file format. It's on-demand capability discovery: a small index with concise metadata so the model can find what's available, then pull details only when needed. That's a real improvement over tool calling and MCP's "preload all tools up front" approach, and it mirrors how humans work. Even as models bake more know-how into their weights, novel capabilities will always be created faster than retraining cycles. And even if context becomes unlimited, preloading everything up front remains wasteful when most of it is irrelevant to the task at hand.
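A minimal sketch of that discovery pattern, assuming a directory of skill files with name/description frontmatter (the layout and field names here are illustrative, not any fixed spec):

```python
# On-demand capability discovery: index skills by cheap metadata,
# load full instructions only when the model asks for them.
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    """Naive frontmatter parser: 'key: value' lines between '---' fences."""
    meta = {}
    if text.startswith("---"):
        header = text.split("---", 2)[1]
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def build_index(skills_dir: Path) -> dict[str, dict]:
    """Cheap index: only name/description enter the context up front."""
    index = {}
    for path in skills_dir.glob("*/SKILL.md"):
        meta = parse_frontmatter(path.read_text())
        index[meta.get("name", path.parent.name)] = {
            "description": meta.get("description", ""),
            "path": path,
        }
    return index

def load_skill(index: dict, name: str) -> str:
    """Progressive disclosure: pull the full instructions only when needed."""
    return index[name]["path"].read_text()
```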

So even if "Skills" gets replaced, discoverability and progressive disclosure likely survive.

Yes, this 100%. Every person I speak with who is excited about MCP is some LinkedIn guru or product expert. I've yet to encounter a seriously technical person excited by any of this.

MCP, as a concept, is a great idea.

The problem isn’t having a standard way for agents to branch out. The problem is that AI is the new JavaScript web framework scene: there’s nothing wrong with frameworks, but when everyone and their son is writing a new framework and half those frameworks barely work, you end up with a buggy, fragmented ecosystem.

I get why this happens. Startups want VC money, established companies then want to appear relevant, and software engineers and students feel pressured to prove they’re hireable. You end up with one giant pissing contest where half the players likely see the ridiculousness of the situation but have little choice other than to join the party.

I have found MCPs to be very useful (albeit with some severe and problematic limitations in the protocol's design). You can bundle them and configure them with a desktop LLM client and distribute them to an organization via something like Jamf. In the context I work in (biotech) I've found it a pretty high-ROI way to give lots of different types of researchers access to a variety of tools and data very cheaply.

I believe you, but can you elaborate? What exactly does MCP give you in this context? How do you use it? I always get high-level answers and I've yet to be convinced, but I would love this to be one of those experiences where I walk away being wrong and learning something new.

Sure, absolutely. Before I do, let me just say, this tooling took a lot of work and problem solving to establish in the enterprise, and it's still far from perfect. MCPs are extremely useful IMO, but there are a lot of bad MCP servers out there and even good ones are NOT easy to integrate into a corporate context. So I'm certainly not surprised when I hear about frustrations. I'm far from an LLM hype man myself.

Anyway: a lot of the earlier stages of drug discovery involve pulling in lots of public datasets and scouring scientific literature for information related to a molecule, a protein, a disease, etc. You join that with your own data, laboratory capabilities, and commercial strategy in order to spot opportunities for new drugs that you could maybe, one day, take into the clinic. This is traditionally an extremely time-consuming and bias-prone activity, and whole startups have sprung up around trying to make it easier.

A lot of the public datasets have MCPs that someone has put together around the provider's REST API. (For example, a while ago Anthropic released "Claude for Life Sciences", which was just a collection of MCPs they had developed over some popular public resources like PubMed.)

For those datasets that don't have open source MCPs, and for our proprietary datasets, we stand up our own MCPs which function as gateways for e.g. running SQL queries or Spark jobs against those datasets. We also include MCPs for writing and running Python scripts using popular bioinformatics libraries, etc. We bundle them with `mcpb` so they can be made into a fully configured one-click installer you can load into desktop LLM clients like Claude Desktop or LibreChat. Then our IT team can provision these fully configured tools for everyone in our organization using MDM tools like Jamf.
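To make the gateway idea concrete, the shape of such a server is roughly the sketch below. Everything here is illustrative: the real ones run warehouse/Spark queries behind auth, sqlite just stands in.

```python
# Hedged sketch of a "data gateway" MCP server: one read-only SQL tool.
# Dataset, connection details, and limits are all assumptions.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dataset-gateway")

@mcp.tool()
def run_query(sql: str, limit: int = 100) -> str:
    """Run a read-only SQL query against the demo dataset. Results travel
    back through the context window, so keep them small."""
    conn = sqlite3.connect("file:demo.db?mode=ro", uri=True)  # read-only open
    try:
        rows = conn.execute(sql).fetchmany(limit)
        return "\n".join(str(row) for row in rows)
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()
```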

We manage the underlying data with classical data engineering patterns (ETL jobs, data definition catalogs, etc.) and give MCP-enabled tools to our researchers as front-end, concierge-type tools. And once they find something they like, we also have MCPs which can help transform those queries into new views, ETL scripts, etc. and serve them using our non-LLM infra, or save tables, protein renderings, graphs, etc. and upload them into docs or spreadsheets to be shared with their peers.

Part of the reason we have set it up this way is to work around the limitations of MCPs (e.g. all responses have to go through the context window, so you can't pass large files around or trust that it's not mangling the responses). But we also do this so as to end up with repeatable/predictable data assets instead of LLM-only workflows. After the exploration is done, the idea is that you use the artifact, not the LLM, to interact with it (though of course you can interact with the artifact in an LLM-assisted workflow as you iterate once again in developing yet another derivative artifact).

Some of why this works for us is perhaps unique to the research context where the process of deciding what to do and evaluating what has already been done is a big part of daily work. But I also think there are opportunities in other areas, e.g. SRE workflows pulling logs from Kubernetes pods and comparing to Grafana metrics, saving the result as a new dashboard, and so on.

What these workflows all have in common, IMO, is that there are humans using the LLM as an aid to drive understanding, and then translating that understanding into more traditional, reliable tools. For this reason, I tend to think that the concept of autonomous "agents" is stupid outside of a few very narrow contexts. That is to say, once you know what you want, you are generally better off with a reliable, predictable, LLM-free application, but LLMs are very useful in the process of figuring out what you want. And MCPs are helpful there.

This is fascinating. I really appreciate the lengthy reply.

How do you handle versioning/updates when datasets change? Do the MCPs break or do you have some abstraction layer?

What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?

Makes sense for research workflows. Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty? Or is it specifically valuable where 'deciding what to do' is the hard part?

Someone else mentioned using Chrome dev tools + Cursor; I'm going to try that one out as a way to convince myself here. I want to make this work but I just feel like I'm missing something. The problem is clearly me, so I guess I need to put in some time here.

I'll give you a short reply, as another person who finds MCP very useful. I think a big gap is that MCPs are often marketed as "taking actions" for you, because that's flashy and looks cool in the eyes of laymen, while most of their actual value is the opposite: using them to gather information to take better non-MCP actions. Connecting them to logs, read-only to (e.g. mock) databases, knowledge bases, and so on. All for querying, not for create/update/delete.

Agree with this framing. They are like RAG setups that you can compose together without needing to build a dedicated app to do it.

> How do you handle versioning/updates when datasets change?

For data MCPs, we use remote MCPs served over an stdio bridge, so our configuration is just mcp-proxy[0] pointed at a fixed URL we control. The server has an /mcp endpoint that provides the tools, and that endpoint is hit whenever the desktop LLM client starts up. Adding/removing/altering tools is then simply a matter of changing that service and redeploying the API. (Note: there are sometimes complications. E.g. if I change an endpoint that used to return data directly so that it now writes a file to cloud storage and returns a URL, because the result is too large, i.e. to work around the aforementioned broken aspect of MCP, then we have to sync with our IT team to deploy a configuration change to everyone's machine.)
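Concretely, the desktop client config ends up looking something like this (server name and URL are hypothetical; check the mcp-proxy README for the exact invocation):

```json
{
  "mcpServers": {
    "data-gateway": {
      "command": "mcp-proxy",
      "args": ["https://mcp.internal.example.com/mcp"]
    }
  }
}
```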

I have seen nicer implementations that use a full MCP gateway, which adds another proxy step to the upstream MCP servers; I haven't used one myself, though I want to. The added benefit is that you can log/track which MCPs your users are using most often and how they are doing, and you can abstract away a lot of the details of auth, monitor for security issues, etc. One of the projects I've looked at in that space is Mint MCP.

> What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?

Low. Which in our case is ideal, since most research ideas can be quickly discarded, saving us a ton of time and money that would otherwise be spent running doomed lab experiments, etc. As you get later in the drug discovery pipeline you have a larger team built around the program, and then the artifacts are more helpful. There still isn't much of a norm in the biotech industry of having an engineering team support an advanced drug program (a mistake, IMO), so these artifacts go a long way given these teams don't have dedicated resources.

> Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty?

I don't know for sure, as I don't live in that world. My instinct is: I wouldn't necessarily roll something like this out to external customers if you have a well-defined product. (IMO there just isn't that much of a market for uncertain outputs of such products, which is why all of the SaaS companies that have launched their integrated AI tools haven't seen much success with them.) But even within a domain like that, it can be useful to e.g. your customer support team, your engineers, etc. For example, one of the ideas on my "cool projects" list is an SRE toolkit that can query across K8s, Loki/Prometheus, your cloud provider, your git provider and help quickly diagnose production issues. I imagine the result of such an exploration would almost always be a new dashboard/alert/etc.

[0] https://github.com/sparfenyuk/mcp-proxy - don't know much about this repo, but it was our starting point

I use only one MCP, but I use it a lot: Chrome DevTools. I get Claude Code to test in the browser, which makes a huge difference when I want it to fix a bug I found in the browser, or if I just want it to do a real-world test on something it just built.

OK this is super practical, thanks for sharing! I'm going to try this out!

I have found MCPs helpful. Recently, I used one to migrate a site from WordPress to Sanity. I pasted in the markdown from the original site and told it to create documents that matched my schemas. This was much quicker and more flexible than whipping up a one-off migration tool. The Sanity MCP uses OAuth, so I also didn’t need to do anything in order to connect to my protected dataset. Just log in. I’ll definitely be using this method in the future for different migrations.

Don't forget A2A: https://developers.googleblog.com/en/a2a-a-new-era-of-agent-...

We'll see how many of these are around in a few years.

I've yet to come across applications implementing A2A in real life.

I've yet to come across applications implementing ANY AI framework in real-life, production-grade projects...

How likely is it that we’ll even remember “the AI stuff” 2-3 years from now? What we’re trying to do with LLMs today is extremely unsustainable. Nvidia/OpenAI will run out of silly investors eventually…

The space is moving fast enough that everything feels provisional.

So like any early phase, there's risk in picking a technology to use.

Extremely likely, but that doesn't mean it lacks value today.

Skills are just prompt conventions; the exact form may change but the substance is reasonable. MCP, eh, it’s pretty bad; I can see it vanishing.

The agent loop architectural pattern (and that’s the relevant bit) is going to continue to matter. There will be new patterns for sure, but tool calling plus a while loop (which is all an “agent” is) is powerful and highly general.
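If that sounds reductive, here's the whole pattern as a runnable sketch; `call_model` and the tool registry are hypothetical stand-ins for a real LLM API and real tools:

```python
# Agent loop sketch: the model proposes tool calls, the host executes them,
# results go back into the transcript, repeat until a final answer.

TOOLS = {"add": lambda a, b: a + b}  # toy tool registry

def call_model(messages):
    """Stand-in for a real LLM API call, which returns either a final
    message or a list of tool calls. Hard-coded here so the sketch runs."""
    if any(m.get("role") == "tool" for m in messages):
        return {"role": "assistant", "content": "2 + 3 = 5", "tool_calls": None}
    return {"role": "assistant", "content": None,
            "tool_calls": [{"name": "add", "arguments": {"a": 2, "b": 3}}]}

def agent_loop(user_goal: str) -> str:
    messages = [{"role": "user", "content": user_goal}]
    while True:                                   # the entire "agent"
        reply = call_model(messages)
        messages.append(reply)
        if not reply.get("tool_calls"):           # no more tools: done
            return reply["content"]
        for call in reply["tool_calls"]:          # execute requested tools
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})

print(agent_loop("What is 2 + 3?"))
```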

Why do you think they will fade out?

Frontier models will eventually eat all the tedious tailored add-ons as just part of something they can do.

Right now models have roughly all of the written knowledge available to mankind, minus some obscure held-out private archives and so on. They have excellent skills and general abilities to construct plausible sequences of actions to accomplish work, but we need to hold their hands to really get decent performance across a wide range of activities. Skills, agent frameworks, and MCP carve out different domains of that problem, with successful solutions providing training data for future models. That data might either be generalized directly, or be used to create a vast mountain of synthetic data following the successful patterns, making the next generation of models incredibly useful for a huge number of tasks by default.

It might also be possible that by studying the problem and identifying where mode collapse and issues with training prevent the right sort of generalization, they might tweak the architecture and solve the deficiency through normal training runs, thereby discarding the need for all the bespoke artisanal agent specifications.

To my eyes skills disappear, MCP and agent definitions do not.

You can have the most capable human available to you, a supreme executive assistant. You still have to convey your intent and needs to them, your preferences, etc, with as high a degree of specificity as necessary.

And you need to provide them with access and mechanisms to do things on your behalf.

Agentic definitions are the former, and they will evolve and grow. I like the metaphor of deal terms in financial contracts: benchmarkers document billions of these now. The "deal terms" governing the work any given entity does for you will be rich and bespoke and specific, like any valuable relationship. Even if the agent is learning about you, your governance is still needed.

MCP is the latter. It is the protocol by which a thing does things for you. It will get extensions. Skill-like directives and instructions will get delivered over it.

Skills themselves are near term scaffold that will soon disappear.

Skills are specific, contextual, and persistent (stateful), whereas LLMs are not.

It isn't between LLM and skill, it's between agent and skill. Orgs that invest in skills will duplicate what they could do once in an agent. Orgs that "buy" skills from a provider will need to endlessly tweak them. Multiskill workflows will have semantic-layer mismatches.

Skills are a great sleight of hand for Anthropic to get people to think Claude Code is a platform. There is no there there. Orgs will figure this out.

Cheers.

I hear you - model development might overcome the shortcomings one day.

However the "waiting out" strategy needs a timeout. It might happen that agentic crutches around LLMs will bear fruit much sooner than high-quality LLMs arrive. If you don't have a timeout or a decent exit criteria you may end up waiting indefinitely, or at least until reality of things becomes too painful to ignore.

The "ski rental problem" comes to mind here, but maybe there is another "wait it out" exit strategy?

> Frontier models will eventually eat all the tedious tailored add-ons as just part of something they can do.

I don't think this makes any sense, as MCP is already part of something they can do.

> Right now models have roughly all of the written knowledge available to mankind, minus some obscure held-out private archives and so on.

Sorry for the nit, but this is a gross oversimplification. Most private archives are not obscure but obfuscated, and they're largely way more valuable as training data than the publicly available ones.

Want to know how the DOD may technically track your phone? Private.

Want to know how to make Coca Cola at scale? Private.

Want to know what the schematic is for a Google TPU? Private.

etc etc.

His point, I believe, was that it is early in the innovation cycle and these could very well be replaced quickly with different solutions/paradigms.

Well, some things fade out and some do not. How do we decide which one it is?

The reason I ask is that the pace of new things arriving is overwhelming, hence I was tempted to just ignore it. Not because things had signs of transience, but because I was drowning and didn't know where to start. That is not the same thing as actually observing signs of things being too foamy.

Agreed. I think if this is overly concerning, developing early in the innovation cycle just might not be the ideal place to be. :)

Adoption of most of these has been weak, except for MCP (and whatever flavor of markdown file you like to add to your agent context).

Microsoft seems to be pushing MCP pretty hard in the Azure ecosystem. My cynical take is that they are very aware of the context bloat and see it as extra inference $$.

Pure speculation, but I feel the inference money is tiny compared to the speed and permanence of Office integrations MCP enables through the consultancy swarm.

MCP lets you glue random-assed parts of services to mega-ultra-high-criticality business initiatives with no go-between. Delivered through a personalized chat interface that will tell you how sexy you are and how you deserved to win at golf yesterday… from salesman to auto-interface to forever contract in minutes.

MS sells to the insecurities of incompetent management and facilitates territory marking, at the expense of governments and societies around the world, for mega bucks. MCP, obvious as it is technically, also lets them plug a library into existing services for a quick upgrade, then an atomized upsell directly to the chat interfaces of upper management.

Microsoft’s CEO has talked about his agent swarm. Much like RPA, this woo appeals strongly to the barely technical.