I don't get why folks are so dismissive here.
If you've ever watched Claude Code/Codex use grep, you'll notice that it constructs complex queries spanning a whole range of keywords that may not even appear in the original user query. So the 'semantic meaning' isn't actually lost.
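For instance, an agent-issued search often ends up as a single alternation over several related identifiers the user never typed. A toy illustration (the file and terms here are made up for the example):

```shell
# User asked: "where do we handle sign-in?"
# The agent expands that into related terms it expects in the codebase:
mkdir -p demo/src
printf 'def authenticate(user):\n    pass\n' > demo/src/auth.py
grep -rEn "login|sign_?in|authenticate|session|oauth" --include='*.py' demo/src
```

None of those five terms is the literal phrase "sign-in", yet the match still lands on the right code.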
And nobody is putting an entire enterprise's knowledge base inside the context window. How many enterprise tasks are there that need referencing more than a dozen docs? And even the ones that do can usually be broken down into sub-tasks of manageable size.
Lastly, nobody here mentions how much of a pain it is to build, maintain, and secure an enterprise vector database. People spend months cleaning the data, chunking and vectorizing it, only for newer versions of the same data to make it redundant overnight. And good luck recreating your entire permissioning and access-control stack on top of the vector database you just created.
The RAG obituary is a bit provocative, and maybe that's intentional. But it's surprising how negative/dismissive the reactions in this thread are.
The article doesn't make a proper distinction of scale, probably because the problem the authors solved was small. At small scale, say <10K documents/files, everything can easily be processed with grep, find, etc. At larger scale, >1M documents, you'll need search-engine technology. But you can apply the same agent approach to the large-scale problem: the agent issues a search, looks at the results, and issues follow-up queries to pull in the documents of interest.

All that said, for the types of problems the OP is solving, it might just be easier to create a project in Claude/ChatGPT, throw the files in there, and be done with it. That approach has been working for over two years now and is nothing new.
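The search/look/refine loop described above can be sketched in a few lines. Everything here is hypothetical scaffolding: `search` stands in for whatever engine you run (Elasticsearch, Vespa, etc.), and `refine` stands in for the LLM proposing a follow-up query after seeing results.

```python
# Sketch of the search-then-refine agent loop for a large corpus.
# The in-memory `index` is a stand-in for a real search backend.

def search(query, k=10):
    """Placeholder backend: in practice, call your search engine here."""
    index = {
        "invoice late fees": ["billing-policy.md", "contract-2023.pdf"],
        "late fee waiver policy": ["billing-policy.md", "waiver-faq.md"],
    }
    return index.get(query, [])[:k]

def agent_retrieve(initial_query, refine, max_rounds=3):
    """Issue a query, inspect results, and let the agent refine or stop."""
    seen, query = set(), initial_query
    for _ in range(max_rounds):
        hits = search(query)
        seen.update(hits)
        query = refine(query, hits)  # agent proposes a follow-up, or None
        if query is None:
            break
    return sorted(seen)

# Toy refinement policy: one follow-up query, then stop.
docs = agent_retrieve(
    "invoice late fees",
    refine=lambda q, hits: "late fee waiver policy"
    if q == "invoice late fees" else None,
)
# docs now covers documents neither single query would have found alone.
```

The point is only the shape of the loop: retrieval quality comes from iteration, not from any one query, which is why it scales from grep on 10K files to a search engine on millions.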