I had seen RAG mentioned a lot before I got into LLM agents. I assumed it was tricky and required deep model-training knowledge.
My first real AI use (beyond copy-paste ChatGPT) was Claude Code. Within a few days I figured out I could just write scripts and tell CLAUDE.md how to use them. For instance, a script that prints the comments and function names in a file is a few lines of Python. MCP seemed like context bloat when a `tools/my-script -h` would put the same information in context on request.
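To give a sense of how little code I mean — here's a toy sketch of that kind of helper (names and structure are mine, not any real tool), pulling function names and comments out of a Python file with the stdlib:

```python
"""Toy sketch of a 'print comments and function names' helper script."""
import ast
import io
import tokenize


def function_names(source: str) -> list[str]:
    # Walk the syntax tree and collect every def / async def name.
    return [node.name
            for node in ast.walk(ast.parse(source))
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]


def comments(source: str) -> list[str]:
    # Tokenize the source and keep only the comment tokens.
    return [tok.string
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)
            if tok.type == tokenize.COMMENT]


if __name__ == "__main__":
    import sys
    src = open(sys.argv[1]).read()
    print("\n".join(function_names(src) + comments(src)))
```

The agent runs it on demand, reads a dozen lines of output, and the rest of the file never touches the context window.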
A few weeks later I stumbled on RAG again, so I decided to read up on it and... what? That's it? A 'prelude function' that dumps 'probably related' things into the context?
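For what it's worth, here's the shape of it as I read it — a toy sketch, not any real library, with word overlap standing in for the usual embedding similarity:

```python
def score(query: str, chunk: str) -> int:
    # Real systems use embedding cosine similarity; word overlap
    # is just a stand-in to show the shape of the step.
    return len(set(query.lower().split()) & set(chunk.lower().split()))


def build_prompt(query: str, chunks: list[str], k: int = 2) -> str:
    # The whole 'prelude': rank stored chunks against the question,
    # then paste the top k into the prompt before the model sees it.
    top = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
    return f"Context:\n" + "\n".join(top) + f"\n\nQuestion: {query}"
```

The model never decides what it needs; the retrieval step guesses up front and hopes the right chunks made the cut.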
From where I'm standing it seems so obviously the wrong way to go, so am I missing something here?