Rolled my own in Python.

For graph/tree document representations, it’s common in RAG to use summaries and aggregation. For example, the search yields a match on a chunk, but you want to include context from adjacent chunks — either laterally, from the same document section, or vertically, going up a level to include the title and summary of the parent node. How you integrate and aggregate the surrounding context is up to you. Different RAG systems handle it differently, each with its own trade-offs. The point is that the system is static and hardcoded.
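A minimal sketch of that static aggregation, assuming a hypothetical `Node` tree (the class and `aggregate_context` helper are illustrative, not from any particular library):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    text: str
    summary: str = ""
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

def aggregate_context(hit: Node, lateral: int = 1) -> str:
    """Statically expand a matched chunk: pull in the parent's summary
    (vertical) and up to `lateral` sibling chunks on each side (lateral)."""
    parts = []
    if hit.parent:
        parts.append(f"[parent] {hit.parent.summary}")
        sibs = hit.parent.children
        i = sibs.index(hit)
        for sib in sibs[max(0, i - lateral): i + lateral + 1]:
            parts.append(sib.text)
    else:
        parts.append(hit.text)
    return "\n".join(parts)
```

The expansion policy (`lateral=1`, always include the parent summary) is fixed at build time — that's the hardcoded part.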

The agentic approach asks: instead of trying to synthesize and rank/re-rank your search results into a single deliverable, why not leave that to the LLM, which can traverse your data dynamically? For a document tree, I would try exposing the tree structure to the LLM. Return each result with pointers to relevant neighbor nodes, each with a short description. Then the LLM can decide, based on what it finds, whether to run a new search or explore local nodes.
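A sketch of what that tool response could look like, reusing the same hypothetical `Node` tree — the hit is returned as-is, with neighbor pointers (parent, siblings, children) plus short descriptions, so the model can choose its next fetch:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    text: str
    summary: str = ""
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

def describe_hit(hit: Node) -> dict:
    """Return a search hit plus pointers to its neighbors, each with a
    short description, for an LLM tool call to expand on demand."""
    neighbors = []
    if hit.parent:
        neighbors.append(
            {"id": hit.parent.id, "rel": "parent", "desc": hit.parent.summary})
        sibs = hit.parent.children
        i = sibs.index(hit)
        if i > 0:
            neighbors.append(
                {"id": sibs[i - 1].id, "rel": "prev_sibling",
                 "desc": sibs[i - 1].summary})
        if i + 1 < len(sibs):
            neighbors.append(
                {"id": sibs[i + 1].id, "rel": "next_sibling",
                 "desc": sibs[i + 1].summary})
    for child in hit.children:
        neighbors.append(
            {"id": child.id, "rel": "child", "desc": child.summary})
    return {"id": hit.id, "text": hit.text, "neighbors": neighbors}
```

Paired with a second tool like `fetch_node(id)`, the LLM drives the traversal itself: it reads the hit, sees the neighbor descriptions, and decides whether to follow a pointer or issue a fresh search.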