Before the first loop iteration, the harness sends the LLM a message along the lines of:

<Skills>
  <Skill>
    <Name>postgres</Name>
    <Description>Directions on how to query the pre-prod postgres db</Description>
    <File>skills/postgres.md</File>
  </Skill>
</Skills>

The harness may then periodically resend this notification so that the LLM doesn't "forget" that skills are available. Because the notification is only name + description + file, this is cheap in terms of tokens.

The harness's ability to tell the LLM "IMPORTANT: this is a skill, so pay attention and use it when appropriate", and then periodically remind it of this, is what differentiates a proper Anthropic-style skill from just sticking "If you need to do postgres stuff, read skills/postgres.md" in AGENTS.md. Just how valuable is this? I'm not sure; I suspect a sufficiently smart LLM won't need the special skill infrastructure.

(Note that the skill name isn't technically required; it's just a vanity/convenience thing.)
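
Concretely, the harness side might look something like this. A rough sketch only: the Skill type and render_skills_notification are invented for illustration, not Anthropic's actual code.

from dataclasses import dataclass

@dataclass
class Skill:
    name: str         # optional in practice, per the note above
    description: str
    file: str

def render_skills_notification(skills: list[Skill]) -> str:
    # Only name + description + file go into context, which is why
    # resending the notification stays cheap.
    items = "\n".join(
        "  <Skill>\n"
        f"    <Name>{s.name}</Name>\n"
        f"    <Description>{s.description}</Description>\n"
        f"    <File>{s.file}</File>\n"
        "  </Skill>"
        for s in skills
    )
    return f"<Skills>\n{items}\n</Skills>"

print(render_skills_notification([
    Skill("postgres",
          "Directions on how to query the pre-prod postgres db",
          "skills/postgres.md"),
]))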

> The harness's ability to tell the LLM "IMPORTANT: this is a skill, so pay attention and use it when appropriate", and then periodically remind it of this, is what differentiates

... And do we know how it does that? To my understanding, there is still no out-of-band signaling.

A lot of tools these days periodically put an extra <system> message into the conversation that the user never sees. It fights context rot and keeps important things fresh.
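
For example (a minimal sketch; the interval, message content, and function name are all made up):

REMINDER_INTERVAL = 10  # cadence is tool-specific; 10 turns is invented

SKILLS_REMINDER = {
    "role": "system",
    "content": "IMPORTANT: skills are available. Re-read the relevant "
               "skills/*.md file before acting.",
}

def with_periodic_reminder(messages: list[dict], turn: int) -> list[dict]:
    # The reminder goes into the model's context but is never shown
    # in the user-facing transcript.
    if turn > 0 and turn % REMINDER_INTERVAL == 0:
        return messages + [SKILLS_REMINDER]
    return messages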
