> Tool Search Tool, which allows Claude to use search tools to access thousands of tools without consuming its context window

At some point, you run into the problem of having many tools that can accomplish the same task. Then you need a tool search engine, which helps you find the most relevant tool for your search keywords. But tool makers start to abuse Tool Engine Optimization (TEO) techniques to push their tools to the top of the tool rankings.

We just need another tool for ranking tools: ToolRank. We'll crowdsource the rankings from a combination of user feedback on the agents themselves and a council of LLM tool rankers.
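Half-joking, but the blend described above is easy to sketch. Everything here is made up for illustration: the `ToolStats` shape, the 60/40 weighting, and the scores are all assumptions, not any real ToolRank API.

```python
from dataclasses import dataclass, field

@dataclass
class ToolStats:
    user_thumbs_up: int = 0
    user_thumbs_down: int = 0
    llm_council_votes: list = field(default_factory=list)  # scores in [0, 1]

def toolrank(stats: ToolStats, user_weight: float = 0.6) -> float:
    """Blend end-user feedback with a council of LLM rankers."""
    total = stats.user_thumbs_up + stats.user_thumbs_down
    user_score = stats.user_thumbs_up / total if total else 0.5
    votes = stats.llm_council_votes
    council_score = sum(votes) / len(votes) if votes else 0.5
    return user_weight * user_score + (1 - user_weight) * council_score

scores = {
    "fetch_weather": toolrank(ToolStats(40, 10, [0.9, 0.8])),
    "weather_pro_ultra": toolrank(ToolStats(5, 20, [0.3, 0.4])),
}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Of course, the moment this exists, the TEO crowd starts farming thumbs-up and gaming the council prompts.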

PageRank was named after Larry Page and not because it ranked pages. So to follow the pattern, you must first find someone whose last name is Tool.

https://youtu.be/nspxAG12Cpc comes to mind for anyone else?

Soon we will get promoted tools that want to show their brand to both the human and the agent. Pay a little extra and you can have your promotion retained in context!

Back when ChatGPT Plugins were a thing, I built a small framework for auto-generating plugins that would make ChatGPT incessantly plug (hehe) a given movie:

https://chatgpt.com/share/6924d192-46c4-8004-966c-cc0e7720e5...

https://chatgpt.com/share/6924d16f-78a8-8004-8b44-54551a7a26...

https://chatgpt.com/share/6924d2be-e1ac-8004-8ed3-2497b17bf6...

They would also modify other plugins/tools just by being in the context window. Like the user asking for 'snacks' would cause the shopping plugin to be called, but with a search for 'mario themed snacks' instead of 'snacks'

I would argue that a lot of these tools will be hosted on GitHub - in fact, most existing repos are potentially tools (in the future). And discovery is just a GitHub search.

btw, GitHub repos are already part of the LLM's training data

So you don't even need the internet to search for tools, let alone TEO.

Security nightmare inbound...

The example given by Anthropic of tools filling valuable context space is a result of bad design.

If you pass the tools below to your agent, you don't need a "search tool" tool, you need good old-fashioned architecture: limiting your tools based on the state of your agent, custom tool wrappers to trim down MCP tools, routing to sub-agents, etc.

Ref:
- GitHub: 35 tools (~26K tokens)
- Slack: 11 tools (~21K tokens)
- Sentry: 5 tools (~3K tokens)
- Grafana: 5 tools (~3K tokens)
- Splunk: 2 tools (~2K tokens)
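The state-based gating idea is simple enough to show in a few lines. This is a minimal sketch under assumed names: the phases ("triage", "debug", "report") and tool names are illustrative, not from any real agent framework.

```python
# Map each agent phase to the few tools it actually needs, instead of
# loading all 58 tool schemas (~54K tokens) into context at once.
ALL_TOOLS = {
    "triage": ["sentry_list_issues", "grafana_query"],
    "debug":  ["github_read_file", "github_search_code", "splunk_search"],
    "report": ["slack_post_message"],
}

def tools_for_state(state: str) -> list[str]:
    """Return only the tool subset relevant to the agent's current phase."""
    return ALL_TOOLS.get(state, [])
```

So a debugging turn exposes three tool schemas to the model rather than all of them, which is the same context saving the "search tool" tool buys you, without the extra indirection.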

I don't see what's wrong with letting the LLM decide which tool to call based on a search over a long list of tools (or a binary tree of lists in case the list becomes too long, which is essentially what you alluded to with sub-agents).
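In its simplest form, "search over a long list of tools" is just scoring tool descriptions against the query. A toy sketch, using naive keyword overlap and made-up tool names; a real index would use embeddings:

```python
# Tiny in-memory tool index: name -> one-line description.
TOOL_INDEX = {
    "github_create_issue": "open a new issue in a github repository",
    "slack_post_message": "send a message to a slack channel",
    "grafana_query": "run a query against grafana dashboards",
}

def search_tools(query: str, top_k: int = 2) -> list[str]:
    """Rank tools by word overlap between the query and each description."""
    q = set(query.lower().split())
    scored = {
        name: len(q & set(desc.split()))
        for name, desc in TOOL_INDEX.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_k]
```

The "binary tree of lists" version is the same idea applied recursively: search over category descriptions first, then over the tools within the winning category.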

I was referring to letting LLMs search GitHub and run tools from there. That's like randomly searching the internet for code snippets and blindly running them on your production machine.

For that, we need sandboxes to run the code in an isolated environment.

Sure, to protect your machine, but what about data security? Do I want to allow unknown code to run on my private/corporate data?

Sandbox all you want, but sooner or later your data can be exfiltrated. My point is that giving an LLM unrestricted access to random runnable code is a bad idea. Curating carefully is my approach.

For data security, you can run the sandbox locally too. See https://github.com/instavm/coderunner

Just wait for the people to update their LinkedIn titles to TEO expert. :)

Don't give anyone any ideas. We now have SEO, GEO, AEO and now TEO? :-p