Despite thinking this is AI-generated, I agree, but everything has a caveat.
Skills are good for instilling non-repeatable, yet intuitive or institutional knowledge.
MCPs are great for custom, repeatable tasks. After 5-10 runs of watching my LLM write the same exact script, I just asked it to hardcode the solution and make it a tool. The result is that runs are way faster and repeatable.
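A minimal sketch of that pattern, with no particular SDK and entirely hypothetical names: the script the model kept regenerating is frozen into a named, hardcoded function, and the agent discovers it through a small tool registry instead of rewriting it every run.

```python
from collections import Counter

# Hypothetical example: the exact logic the LLM kept rewriting,
# now hardcoded so every run is fast and deterministic.
def word_frequencies(text: str, top: int = 3) -> list[tuple[str, int]]:
    """Return the `top` most common whitespace-separated words."""
    return Counter(text.lower().split()).most_common(top)

# Tool registry: the agent calls tools by name + arguments, roughly the
# shape an MCP server exposes via its tools/call mechanism.
TOOLS = {
    "word_frequencies": {
        "fn": word_frequencies,
        "schema": {"text": "string", "top": "integer"},
    },
}

def call_tool(name: str, args: dict):
    """Dispatch a tool call to the hardcoded implementation."""
    return TOOLS[name]["fn"](**args)
```

The key property is that the tool body never changes between runs, so the only nondeterminism left is in when the agent decides to call it.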
I would go further than this. Call the script yourself (or via some other mechanism/program) and then give the results to the LLM.
The majority of processes don't need nearly as many decision-making points as an agent could handle; they look somewhat like this:
1. gather raw information => script
2. turn it into structured data => script
3. produce an actionable plan => script/user/agent (depends)
4. validate the plan => user
5. narrow down the implementation workflow and the set of tools needed => user/agent
6. follow workflow iteratively => user/agent
Doesn't need to be this exact shape, but the lesson I learned is to front-load and structure as much as possible with scripts and data. That can be done with agent assistance as well, for example by first watching it do the task, or a similar one, freeform.
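The steps above could be sketched in plain Python like this (every function name here is a hypothetical placeholder): the deterministic front half runs as scripts, and a validation checkpoint gates the plan before any user/agent steps begin.

```python
# Hypothetical pipeline: steps 1-2 are plain scripts, step 3 happens to be
# scripted here too, and step 4 is a validation gate standing in for the user.

def gather_raw(source: list[str]) -> list[str]:
    """Step 1: gather raw information (script)."""
    return [line.strip() for line in source if line.strip()]

def structure(raw: list[str]) -> list[dict]:
    """Step 2: turn it into structured data (script)."""
    return [{"id": i, "item": item} for i, item in enumerate(raw)]

def plan(records: list[dict]) -> list[dict]:
    """Step 3: produce an actionable plan (script/user/agent, depending)."""
    return sorted(records, key=lambda r: r["item"])

def validate(p: list[dict]) -> bool:
    """Step 4: validate the plan (stand-in for the user checkpoint)."""
    return all("item" in r for r in p)

def run_pipeline(source: list[str]) -> list[dict]:
    p = plan(structure(gather_raw(source)))
    if not validate(p):
        raise ValueError("plan rejected at the validation checkpoint")
    # Steps 5-6 (picking tools, iterating on the workflow) would start
    # from this validated plan with a user or agent in the loop.
    return p
```

The point isn't this particular code; it's that everything up to the validated plan is scriptable, so the agent only enters where judgment is actually needed.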
There's definitely some optimization that can occur, like an orchestrator or Ralph.
You could hardcode the script as a file within a skill too, right? Skills can contain code, not just markdown files.
Have not tried, but interesting. I guess my concern would be that the Skill still takes up context space, whereas MCP is just using CPU.
> Despite thinking this is AI-generated, I agree but everything has a caveat.
Definitely not AI generated. I wrote this during a non-internet flight. :)
Haha sorry for the callout! Saw the em-dash and auto-assumed. Nice write-up and thanks for sharing :)
> Skills are good for instilling non-repeatable, yet intuitive or institutional knowledge.
Maybe I'm misinterpreting you, but can you explain this more? I've been using skills for repeatable tasks. Why an MCP instead?
Saying "non-repeatable" was probably wrong. "Unique" might be better: things LLMs aren't naturally able to do or infer.
If the model can figure it out with tokens, but my institutional-knowledge MCP tool can do it with a few CPU cycles, it's faster, deterministic, and repeatable.
It is all about API contracts, right?
After the first run, you have a script and an API: the agent discovery mechanism is a detail. If the script is small enough, and the task custom enough, you could simply add the script to the context and say "use this, adapt if needed".
Or am I misunderstanding you?
> Skills are good for instilling non-repeatable, yet intuitive or institutional knowledge.
What about just putting that sort of thing in human-targeted documentation? Why call it a “skill” and hide it somewhere a human is less likely to look?
(Skills are nice for providing /shortcuts.)