I don't understand how skills are different than just instructing your model to read all the front-matters from a given folder on your filesystem and then decide if they need to read the file body.
That is basically what it is tho.
One difference is the model might have been trained/fine-tuned to be better at "read all the front-matters from a given folder on your filesystem and then decide..." compared to a model that only has those instructions in its context.
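For what it's worth, the "manual alternative" being described could fit in a short script. This is just a sketch; the `skills/` folder layout and the `name`/`description` front-matter keys are assumptions for illustration, not anything defined by the feature itself:

```python
from pathlib import Path

SKILLS_DIR = Path("skills")  # hypothetical folder of markdown files with YAML front-matter


def read_front_matter(path: Path) -> dict:
    """Naively parse the `key: value` lines between the leading `---` fences."""
    lines = path.read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing fence: stop
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta


def build_index() -> str:
    """Return a compact listing (name + description only) to drop into the prompt."""
    entries = []
    for path in sorted(SKILLS_DIR.glob("**/*.md")):  # recursive, so it can be hierarchical
        meta = read_front_matter(path)
        if meta:
            entries.append(f"- {meta.get('name', path.stem)}: {meta.get('description', '')} ({path})")
    return "\n".join(entries)


if __name__ == "__main__":
    # The model only ever sees this index up front; it asks to read a
    # particular file body when (and if) the description looks relevant.
    print(build_index())
```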
Also, does your method run scripts and code in any kind of sandbox or other containment or do you give it complete access to your system? #yolo
Not my method really, just a comparison. I didn't know about the sandbox.
I see there might be advantages. The manual alternative could be tweaked further, though. For example, you could make it hierarchical.
Or you could create a "howTo" MCP with more advanced search capabilities (or a grandma MCP to ask for advice after a failure).
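A "howTo" MCP like that could be a very small server. The sketch below uses the FastMCP helper from the official Python MCP SDK, but the server name, the `howtos/` folder, and the naive keyword search are all made up for illustration:

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP  # official Python MCP SDK

mcp = FastMCP("howto")        # hypothetical server name
DOCS_DIR = Path("howtos")     # hypothetical folder of how-to notes


@mcp.tool()
def search_howtos(query: str, limit: int = 5) -> str:
    """Return how-to notes whose text mentions the query (naive keyword search)."""
    hits = []
    for path in sorted(DOCS_DIR.glob("**/*.md")):
        text = path.read_text(encoding="utf-8")
        if query.lower() in text.lower():
            hits.append(f"## {path}\n{text[:500]}")  # truncate so results stay small
        if len(hits) >= limit:
            break
    return "\n\n".join(hits) or "No matching how-tos found."


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable client
```

Swapping the substring match for embeddings or a proper index is where the "more advanced search" part would come in.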
Interesting topic. I guess nobody has found a real best practice yet; everybody is still exploring.
Yes, I'm confused as well; it feels like it's still all prompting, which isn't new or different in the LLM space.
It’s all just loading data into the context/conversation. Sometimes, as part of the chat response, the LLM will request that the client do something - read a file, call a tool, etc. - and the results end up back in the context as well.
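That loop - model asks, client executes, result re-enters the context - might look roughly like this. `call_model`, the message shape, and the `read_file` tool are hypothetical placeholders here, not any particular vendor SDK:

```python
def call_model(messages: list[dict]) -> dict:
    """Placeholder for whatever SDK call returns the next model turn."""
    raise NotImplementedError


def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()


TOOLS = {"read_file": read_file}  # actions the client is willing to run


def run(conversation: list[dict]) -> list[dict]:
    while True:
        reply = call_model(conversation)   # model answers, or asks the client to do something
        conversation.append(reply)
        if reply.get("tool") is None:      # plain text answer: we're done
            return conversation
        # Client executes the requested action locally...
        result = TOOLS[reply["tool"]](**reply.get("args", {}))
        # ...and the result goes back into the context as just another message.
        conversation.append({"role": "tool", "tool": reply["tool"], "content": result})
```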
Post training :)