> The agent follows references like a human analyst would. No chunks. No embeddings. No reranking. Just intelligent navigation.

I think this sums it up well. Working with LLMs is already confusing and unpredictable enough. Adding a convoluted RAG pipeline (unless context-size limits truly demand it) only makes things worse compared with simply emulating how we would read the material ourselves.
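To make "intelligent navigation" concrete, here is a minimal sketch of what such an agent loop could look like: the model reads a document, decides which referenced file to open next, and stops when it has enough to answer. The `ask_llm` function, the `OPEN`/`ANSWER` reply format, and the file layout are all hypothetical placeholders, not any particular framework's API.

```python
from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM backend you use.
    Assumed to reply either 'OPEN <relative path>' or 'ANSWER <text>'."""
    raise NotImplementedError  # hypothetical: wire up your own client here

def navigate(root: Path, start: str, question: str, max_steps: int = 10) -> str:
    """Answer a question by letting the model follow references between files,
    instead of retrieving pre-chunked, pre-embedded passages."""
    current = start
    history: list[str] = []
    for _ in range(max_steps):
        text = (root / current).read_text()
        history.append(f"--- {current} ---\n{text}")
        reply = ask_llm(
            f"Question: {question}\n\n"
            f"Documents read so far:\n{''.join(history)}\n\n"
            "If you need another referenced document, reply 'OPEN <path>'. "
            "Otherwise reply 'ANSWER <your answer>'."
        )
        if reply.startswith("OPEN "):
            current = reply.removeprefix("OPEN ").strip()
        else:
            return reply.removeprefix("ANSWER ").strip()
    return "No answer found within the step budget."
```

The only real constraint is the one already mentioned: the documents the agent visits have to fit in the context window. Within that limit, a loop like this is far easier to reason about and debug than a chunking, embedding, and reranking pipeline.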