I will never understand why someone would go through all the trouble of developing this cool idea, without bothering to link a demo or include sample output. I see this every day on HN.
So the only way I can see what this skill actually looks like is to download and run it myself? No thank you.
I'm still finding skill use to be far less reliable than clear instructions in AGENTS.md. I appreciate the idea is to let the agent skip skills that aren't relevant, to avoid context bloat, but there's no way (without an explicit instruction in AGENTS.md) to ensure that the agent will use the skill, and at that point it might as well be any markdown file referenced from any location.
While building https://www.agentkanban.io (a GitHub Copilot-integrated task board), I experimented a lot with instruction placement. A single degree of separation from AGENTS.md works really well: I needed a robust way to have the agent pick up task-specific IDs, so I settled on a file called INSTRUCTION.md in a location managed by the tool, which avoids polluting AGENTS.md as much as possible. I experimented with skills, but they were skipped too often for the tool to work as reliably as it now does.
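The pointer pattern is roughly this; the path and wording below are illustrative, not the tool's actual layout:

```
# AGENTS.md (excerpt)
Before starting any board task, read the INSTRUCTION.md in the
task's working directory for the task ID and status-update steps.
```

One hop away keeps AGENTS.md small while still guaranteeing the agent sees the detailed instructions whenever it follows the pointer.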
Claude auto-injects skill descriptions into the context, and is pretty good about using them. I don’t know about the other harnesses.
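What gets injected is the name and description from the SKILL.md frontmatter, so the description does most of the work. A minimal sketch (skill name and wording made up):

```yaml
---
name: changelog-writer
description: Drafts CHANGELOG entries from recent commits. Use when the
  user asks to write or update the changelog.
---
```

Claude preloads only these few lines and reads the full skill body on demand when the description looks relevant to the task.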
I've had cases where it doesn't explicitly use a skill I've added, BUT it still performs the actions described in the skill on its own more often than it did before I created the skill. I'd rather it use the skill for consistency, but having it follow most of the steps most of the time in cases where I've forgotten to explicitly call out the skill is still a better outcome.
In the case of Claude Code, I explicitly kick off skills I want to use by prefixing their name with a `/`, e.g. (skill name made up):
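```
/changelog-writer draft an entry for this week's commits
```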
The SKILL.md is right there, you can just read it to see what it does.
A sample output will give the user an idea of whether the project is worth their time.
“The recipe is right there, you can just read it to see how it tastes.”