grep was invented at a time when computers had so little memory that you often couldn't load a whole text file at once. So you had tools that edited one line at a time, or searched through a text file one line at a time.
LLMs have a similar issue with their context windows. Go back to GPT-2 and you couldn't have fit a full text file into its context. Slowly that memory is increasing, the same way it did for early computers.
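The line-at-a-time approach is easy to sketch: only the current line ever lives in memory, so file size doesn't matter. A minimal grep-style scanner in Python (the function name and signature are just for illustration):

```python
import re

def grep(pattern, path):
    """Scan a file one line at a time, grep-style: only the current
    line is held in memory, never the whole file."""
    regex = re.compile(pattern)
    with open(path, encoding="utf-8", errors="replace") as f:
        # enumerate from 1 so reported line numbers match editor conventions
        for lineno, line in enumerate(f, 1):
            if regex.search(line):
                yield lineno, line.rstrip("\n")
```

Because the file object is iterated lazily, this works the same on a 1 KB file or a 100 GB log.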
Agree, it's a context/memory issue. Soon LLMs will have 10M-token context windows and won't need to search. Most codebases are under 10M tokens.
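For a rough sanity check of that size claim, you can estimate a repo's token count from its byte count. A common heuristic (an assumption here, not a measurement) is roughly 4 bytes per token for English-like text and code; the function below is a hypothetical sketch using that ratio:

```python
import os

def estimate_tokens(root, exts=(".py", ".c", ".js", ".md")):
    """Rough token estimate for a source tree, assuming ~4 bytes
    per token (a crude heuristic, not a real tokenizer)."""
    total_bytes = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                total_bytes += os.path.getsize(os.path.join(dirpath, name))
    return total_bytes // 4
```

By that heuristic, 10M tokens corresponds to roughly 40 MB of source text, which is indeed bigger than most application codebases.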
When dependencies are factored in, I don't know if this is true.