But couldn’t an LLM search for documents in that enterprise knowledge base just like humans do, using the same kind of queries and the same underlying search infrastructure?

I wouldn't say humans are efficient at that, so there's no reason to copy them, other than as a starting point.

Maybe not efficient, but if LLMs can't even clear that human baseline, I'm not sure they're ready for anything fancier.

Yes, but that would be worse than many RAG approaches, which were implemented precisely because there's no clean way to search through a knowledge base, for a million different reasons.

At that point, you are just doing Agentic RAG, or even just Query Review + RAG.
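For what it's worth, the "query review + RAG" shape is simple enough to sketch. Everything below is a hypothetical stand-in (a toy keyword search over an in-memory corpus), not any particular framework: a real system would replace `rewrite_query` with an LLM call and `search` with the existing enterprise search infrastructure.

```python
def rewrite_query(question: str) -> list[str]:
    """Stand-in for an LLM call that reformulates a question into search queries."""
    # A real system would prompt an LLM; here we just normalize and expand.
    base = question.lower().rstrip("?")
    return [base, base + " documentation"]

def search(query: str, corpus: dict[str, str]) -> list[str]:
    """Naive keyword overlap search, standing in for the real search backend."""
    terms = set(query.split())
    scored = [(len(terms & set(text.lower().split())), doc_id)
              for doc_id, text in corpus.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

def retrieve(question: str, corpus: dict[str, str]) -> list[str]:
    """Query rewrite -> retrieve; an LLM would then generate an answer from the hits."""
    hits: list[str] = []
    for q in rewrite_query(question):
        for doc_id in search(q, corpus):
            if doc_id not in hits:
                hits.append(doc_id)
    return hits

corpus = {
    "vpn-setup": "how to configure the corporate vpn client",
    "expense-policy": "travel expense reimbursement policy",
}
print(retrieve("How to configure the VPN?", corpus))  # prints ['vpn-setup']
```

The point of the rewrite step is that the literal user question is often a poor search query; letting the model reformulate it (and issue several variants) before hitting the index is most of what "agentic" adds here.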

I mean, yeah, agentic RAG is the future. It's still RAG though.