From building in this space: agents choose tools based on how well they're described in context, not on brand recognition or marketing.

Practically: the agent reads your docs, README, or API description and decides if it can use your tool to solve the current problem. So the question is really "will an AI understand my tool well enough to use it correctly?"

What helps:

- Clear, literal API documentation (not marketing copy)
- Explicit input/output examples with edge cases
- A `capabilities.md` or similar that describes what the tool does and doesn't do
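To make that concrete, here's a minimal sketch of a literal, machine-readable tool description in the JSON-schema style most agent frameworks accept. The tool name, fields, and examples are all made up for illustration, not any specific framework's API:

```python
import json

# Illustrative tool description: literal wording, explicit scope limits,
# and input/output examples including an edge case.
tool_description = {
    "name": "convert_currency",
    "description": (
        "Convert an amount from one ISO-4217 currency to another using "
        "the latest cached rate. Does NOT fetch live rates and does NOT "
        "handle historical dates."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "amount": {"type": "number", "description": "Amount to convert, e.g. 19.99"},
            "from_currency": {"type": "string", "description": "ISO 4217 code, e.g. 'USD'"},
            "to_currency": {"type": "string", "description": "ISO 4217 code, e.g. 'EUR'"},
        },
        "required": ["amount", "from_currency", "to_currency"],
    },
    # Explicit examples, including a zero-amount edge case.
    "examples": [
        {"input": {"amount": 100, "from_currency": "USD", "to_currency": "EUR"},
         "output": {"amount": 92.3}},
        {"input": {"amount": 0, "from_currency": "USD", "to_currency": "USD"},
         "output": {"amount": 0}},
    ],
}

print(json.dumps(tool_description, indent=2))
```

Note the "does NOT" lines: stating what the tool won't do is exactly the kind of literalness that reads badly as marketing copy but keeps an agent from misusing the tool.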

The irony: the skills that make tools understandable to AI (precision, literalness, examples) are the opposite of what makes them legible to humans (narrative, benefits, stories).

Is there some additional tool, service, or instrument that can measure this?

I mean, how do I check that my documentation changes actually work as intended?
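One common way to check this is an eval harness: keep a fixed set of task → expected-tool pairs, have the agent pick a tool for each task, and compare selection accuracy before and after a docs change. A minimal sketch, where `choose_tool` is a word-overlap stub standing in for a real LLM call (all names here are hypothetical):

```python
# Fixed eval set: (user task, tool the agent should pick).
EVAL_CASES = [
    ("convert 20 USD to EUR", "convert_currency"),
    ("what's the weather in Oslo", "get_weather"),
    ("translate 'hello' to French", "translate_text"),
]

def choose_tool(task: str, tool_docs: dict) -> str:
    """Stub: pick the tool whose description shares the most words with
    the task. A real harness would ask the agent model to choose."""
    task_words = set(task.lower().split())
    return max(
        tool_docs,
        key=lambda name: len(task_words & set(tool_docs[name].lower().split())),
    )

def selection_accuracy(tool_docs: dict) -> float:
    """Fraction of eval cases where the right tool is selected."""
    hits = sum(choose_tool(task, tool_docs) == expected
               for task, expected in EVAL_CASES)
    return hits / len(EVAL_CASES)

# Marketing-style descriptions vs. literal ones.
vague_docs = {
    "convert_currency": "a great utility for your money needs",
    "get_weather": "weather stuff",
    "translate_text": "language helper",
}
literal_docs = {
    "convert_currency": "convert an amount from one currency to another, e.g. USD to EUR",
    "get_weather": "get the current weather for a city, e.g. the weather in Oslo",
    "translate_text": "translate text into a target language, e.g. translate hello to French",
}

print("vague docs accuracy:  ", selection_accuracy(vague_docs))
print("literal docs accuracy:", selection_accuracy(literal_docs))
```

Swap the stub for a real model call and run it in CI: a docs change that drops selection accuracy is a regression, same as any other test failure.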
