This burns a ton of tokens, but the basic idea is to create a cache in ./llm/cache and then spawn a bunch of subagents that check the cache first and write back to it whenever they learn anything new.
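A minimal sketch of that read-through cache, assuming one JSON file per topic under ./llm/cache (function names and file layout here are hypothetical, not from any particular agent framework):

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("./llm/cache")

def _cache_path(topic: str) -> Path:
    # One JSON file per topic, named by a short hash of the topic string.
    digest = hashlib.sha256(topic.encode("utf-8")).hexdigest()[:16]
    return CACHE_DIR / f"{digest}.json"

def cache_lookup(topic: str):
    """Return cached notes for a topic, or None on a miss."""
    path = _cache_path(topic)
    if path.exists():
        return json.loads(path.read_text())
    return None

def cache_write(topic: str, notes: dict) -> None:
    """Record what a subagent learned so later agents can skip the work."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    _cache_path(topic).write_text(json.dumps(notes, indent=2))
```

Each subagent would call cache_lookup before doing any expensive exploration, and cache_write once it has something new; the hash keeps filenames filesystem-safe regardless of what the topic string contains.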