Yes, this, 100%. Every person I speak with who is excited about MCP is some LinkedIn guru or product expert. I have yet to encounter a seriously technical person excited by any of this.
MCP, as a concept, is a great idea.
The problem isn’t having a standard way for agents to branch out. The problem is that AI is the new JavaScript web framework: there’s nothing wrong with frameworks, but when everyone and their son is writing a new one and half of them barely work, you end up with a buggy, fragmented ecosystem.
I get why this happens. Startups want VC money, established companies want to appear relevant, and software engineers and students feel pressured to prove they’re hireable. And you end up with one giant pissing contest where half the players likely see the ridiculousness of the situation but have little choice other than to join the party.
I have found MCPs to be very useful (albeit with some severe and problematic limitations in the protocol's design). You can bundle them and configure them with a desktop LLM client and distribute them to an organization via something like Jamf. In the context I work in (biotech) I've found it a pretty high-ROI way to give lots of different types of researchers access to a variety of tools and data very cheaply.
I believe you, but can you elaborate? What exactly does MCP give you in this context? How do you use it? I always get high-level answers and I'm yet to be convinced, but I would love this to be one of those experiences where I walk away being wrong and learning something new.
Sure, absolutely. Before I do, let me just say, this tooling took a lot of work and problem solving to establish in the enterprise, and it's still far from perfect. MCPs are extremely useful IMO, but there are a lot of bad MCP servers out there and even good ones are NOT easy to integrate into a corporate context. So I'm certainly not surprised when I hear about frustrations. I'm far from an LLM hype man myself.
Anyway: a lot of the earlier stages of drug discovery involve pulling in lots of public datasets and scouring scientific literature for information related to a molecule, a protein, a disease, etc. You join that with your own data, laboratory capabilities, and commercial strategy in order to spot opportunities for new drugs that you could maybe, one day, take into the clinic. This is traditionally an extremely time-consuming and bias-prone activity, and whole startups have sprung up around trying to make it easier.
A lot of the public datasets already have MCPs that someone has put together around their REST APIs. (For example, a while ago Anthropic released "Claude for Life Sciences," which was just a collection of MCPs they had developed over some popular public resources like PubMed.)
For those datasets that don't have open source MCPs, and for our proprietary datasets, we stand up our own MCPs which function as gateways for e.g. running SQL queries or Spark jobs against those datasets. We also include MCPs for writing and running Python scripts using popular bioinformatics libraries, etc. We bundle them with `mcpb` so they can be made into a fully configured one-click installer you can load into desktop LLM clients like Claude Desktop or LibreChat. Then our IT team can provision these fully configured tools for everyone in our organization using MDM tools like Jamf.
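To make "gateway" concrete, here's a minimal sketch of the pattern (not our actual code) using the MCP Python SDK's FastMCP helper. The dataset name, the SQLite backing store, and the row cap are all illustrative stand-ins for our real warehouse connections.

```python
# Minimal sketch of a read-only "data gateway" MCP server.
# Assumptions: the `mcp` Python SDK is installed; "assay.db" is a
# hypothetical stand-in for a real warehouse/Spark connection.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("assay-data")  # hypothetical dataset name

@mcp.tool()
def run_query(sql: str, limit: int = 200) -> str:
    """Run a read-only SQL query against the assay dataset.

    Rows are capped because everything returned here has to pass
    through the model's context window.
    """
    conn = sqlite3.connect("file:assay.db?mode=ro", uri=True)  # read-only open
    try:
        rows = conn.execute(sql).fetchmany(limit)
        return "\n".join(str(r) for r in rows)
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; mcpb bundles servers like this
```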
We manage the underlying data with classical data engineering patterns (ETL jobs, data definition catalogs, etc.) and give MCP-enabled tools to our researchers as front-end, concierge-type tools. Once they find something they like, we also have MCPs which can help transform those queries into new views, ETL scripts, etc. and serve them using our non-LLM infra, or save tables, protein renderings, graphs, etc. and upload them into docs or spreadsheets to be shared with their peers.

Part of the reason we set it up this way is to work around the limitations of MCPs (e.g. all responses have to go through the context window, so you can't pass large files around or trust that the model isn't mangling the responses). But we also do it so as to end up with repeatable/predictable data assets instead of LLM-only workflows. After the exploration is done, the idea is that you use the artifact, not the LLM, to interact with the data (though of course you can interact with the artifact in an LLM-assisted workflow as you iterate once again on yet another derivative artifact).
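Continuing the sketch above, the "promote what you found" handoff can itself be just another tool on the same server. This is illustrative only; a real version would validate identifiers and write through our ETL tooling rather than straight into a database.

```python
import sqlite3  # continuing the gateway sketch above (same `mcp` instance)

@mcp.tool()
def save_as_view(view_name: str, sql: str) -> str:
    """Materialize an exploratory query as a named view, so it becomes a
    repeatable, LLM-free data asset served by ordinary infrastructure."""
    conn = sqlite3.connect("assay.db")  # writable this time
    try:
        # Sketch only: no identifier sanitization or review step here.
        conn.execute(f"CREATE VIEW {view_name} AS {sql}")
        conn.commit()
        return f"Created view {view_name}; it's now queryable outside the LLM."
    finally:
        conn.close()
```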
Some of why this works for us is perhaps unique to the research context where the process of deciding what to do and evaluating what has already been done is a big part of daily work. But I also think there are opportunities in other areas, e.g. SRE workflows pulling logs from Kubernetes pods and comparing to Grafana metrics, saving the result as a new dashboard, and so on.
What these workflows all have in common, IMO, is that there are humans using the LLM as an aid to derive understanding, and then translating that understanding into more traditional, reliable tools. For this reason, I tend to think the concept of autonomous "agents" is stupid outside of a few very narrow contexts. That is to say, once you know what you want, you are generally better off with a reliable, predictable, LLM-free application, but LLMs are very useful in the process of figuring out what you want. And MCPs are helpful there.
This is fascinating. I really appreciate the lengthy reply.
How do you handle versioning/updates when datasets change? Do the MCPs break or do you have some abstraction layer?
What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?
Makes sense for research workflows. Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty? Or is it specifically valuable where 'deciding what to do' is the hard part?
Someone else mentioned using Chrome DevTools + Cursor; I'm going to try that out as a way to convince myself here. I want to make this work, but I just feel like I'm missing something. The problem is clearly me, so I guess I need to put in some time here.
I'll give you a short reply, as another person who finds MCP very useful. I think a big gap is that MCPs are often marketed as "taking actions" for you, because that's flashy and looks cool to laymen, while most of their actual value is the opposite: using them to gather information so you can take better non-MCP actions. Connecting them to logs, read-only to (e.g. mock) databases, knowledge bases, and so on. All for querying, not for create/update/delete.
Agree with this framing. They are like RAG setups that you can compose together without needing to build a dedicated app to do it.
> How do you handle versioning/updates when datasets change?
For data MCPs, we use remote MCPs served over a stdio bridge, so our configuration is just mcp-proxy[0] pointed at a fixed URL we control. The server has an /mcp endpoint that provides the tools, and that endpoint is hit whenever the desktop LLM client starts up. So adding/removing/altering tools is simply a matter of changing that service and redeploying the API. (Note: there are sometimes complications. E.g. if I change an endpoint that used to return data directly so that it now writes a file to cloud storage and returns a URL, because the result is too large, i.e. to work around the aforementioned context-window limitation of MCP, we have to sync with our IT team to deploy a configuration change to everyone's machine.)
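Concretely, the per-machine piece is tiny. The desktop client config ends up being something like this (the server name and URL are made up, and mcp-proxy's exact invocation may differ by version):

```json
{
  "mcpServers": {
    "bio-data": {
      "command": "mcp-proxy",
      "args": ["https://mcp.internal.example.com/mcp"]
    }
  }
}
```

Everything interesting lives behind that URL, so redeploying the service is usually enough; the client config only has to change when a tool's calling convention changes (like the URL-instead-of-data case above).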
I have seen nicer implementations that use a full MCP gateway, which adds another proxy step in front of the upstream MCP servers. The added benefit is that you can log/track which MCPs your users are using most often and how they are doing, abstract away a lot of the details of auth, monitor for security issues, etc. One of the projects I've looked at in that space is Mint MCP, though I haven't used it myself.
> What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?
Low. Which in our case is ideal, since most research ideas can be quickly discarded, saving us a ton of time and money that would otherwise be spent running doomed lab experiments, etc. As you get later in the drug discovery pipeline you have a larger team built around the program, and then the artifacts are more helpful. There still isn't much of a norm in the biotech industry of having an engineering team support an advanced drug program (a mistake, IMO), so these artifacts go a long way given that these teams don't have dedicated resources.
> Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty?
I don't know for sure, as I don't live in that world. My instinct is: I wouldn't necessarily roll something like this out to external customers if you have a well-defined product. (IMO there just isn't much of a market for the uncertain outputs of such products, which is why so many of the SaaS companies that have launched integrated AI tools haven't seen much success with them.) But even within a domain like that, it can be useful internally, e.g. to your customer support team, your engineers, etc. For example, one of the ideas on my "cool projects" list is an SRE toolkit that can query across K8s, Loki/Prometheus, your cloud provider, and your git provider to help quickly diagnose production issues. I imagine the result of such an exploration would almost always be a new dashboard/alert/etc.
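To give a flavor, here's a sketch of one tool from that hypothetical toolkit: a read-only PromQL query exposed over MCP, again using the Python SDK's FastMCP helper. The server name and Prometheus URL are made-up placeholders.

```python
# Sketch only: one read-only tool from a hypothetical SRE MCP toolkit.
# Queries Prometheus's standard HTTP API (GET /api/v1/query).
import json
import urllib.parse
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sre-toolkit")  # hypothetical server name

PROM_URL = "http://prometheus.internal:9090"  # placeholder endpoint

@mcp.tool()
def prom_query(promql: str) -> str:
    """Run a PromQL instant query and return the result JSON as text."""
    url = f"{PROM_URL}/api/v1/query?" + urllib.parse.urlencode({"query": promql})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Only the "data" payload goes back through the context window.
    return json.dumps(data.get("data", {}), indent=2)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

Same read-only framing as above: the LLM queries, the human decides, and the durable output is a dashboard or alert built with normal tooling.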
[0] https://github.com/sparfenyuk/mcp-proxy - don't know much about this repo, but it was our starting point
I use only one MCP, but I use it a lot: Chrome DevTools. I get Claude Code to test in the browser, which makes a huge difference when I want it to fix a bug I found in the browser, or when I just want it to do a real-world test on something it just built.
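If you want to try it, the setup is small. Assuming the official chrome-devtools-mcp package, a project-level .mcp.json for Claude Code looks roughly like this (this is from memory, so double-check the package README):

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```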
OK this is super practical, thanks for sharing! I'm going to try this out!
I have found MCPs helpful. Recently, I used one to migrate a site from WordPress to Sanity. I pasted in the markdown from the original site and told it to create documents that matched my schemas. This was much quicker and more flexible than whipping up a single-purpose migration tool. The Sanity MCP uses OAuth, so I also didn't need to do anything to connect to my protected dataset; just log in. I'll definitely be using this method in the future for different migrations.