The reason they are long is that these skills are produced mostly by Claude Code and Opus, and no sensible human will read these files, let alone build a mental model around them. There are just layers of assumptions that this works - when in reality it doesn't, and it is wasteful.
Here is a fun experiment.
Ask any LLM to write something vaguely familiar. For example, ask it "write a fib". Since almost all LLMs are fine-tuned on code, I find that all of them will respond with a Fibonacci sequence algorithm, even though to a non-programmer "write a fib" means to write an unimportant lie.
So there is compression. You can express an outcome in just 3 vague tokens without going into the details of what exactly a Fibonacci sequence is.
That should be enough to show that the length of the prompt does not matter. What matters is the right words, their frequency, and their order. You can write a two-page prompt or a two-sentence prompt, and both can have the same outcome.
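For reference, here is roughly what those 3 vague tokens expand into - a minimal sketch of the kind of iterative Fibonacci function most models produce (details will vary from model to model):

```python
def fib(n):
    """Return the first n Fibonacci numbers as a list."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fib(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The point stands: "write a fib" reliably decompresses into something close to these ten lines, with no further specification needed.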
I just tried it with Gemini Pro. I think this answer is about as good as you can expect for such an ambiguous question.
Write a fib
Since "fib" can mean a couple of different things, I've got you covered for both!
1. A Little Lie (A Fib) "I'm actually typing this to you from a sunny beach in the Bahamas, sipping a piña colada." (Since I'm an AI, that is definitely a fib!)
2. The Fibonacci Sequence If you meant the classic programming exercise, here is a Python function...
I stand to be corrected. Though I tried again just now and this is what Gemini Pro produced:
> I'm assuming you mean a Fibonacci sequence generator! I'll write a Python script that includes both an iterative and a recursive way to generate Fibonacci numbers.
... and then wrote some Python code.