I'm not sure we are talking about the same thing. The root comment talks about concatenating all doc files into one long text string, and adding that as a system/user prompt to the LLM at inference time, before the actual question.

You mention the retrieval stage being a SELECT *? I don't think there's any SQL involved here.

I was being rhetorical. The R in RAG is filtering augmentation data (the A) for things that might or might not be related to the query. Including everything is just a lazy form of RAG -- the rhetorical SELECT *.

>and adding that as a system/user prompt to the LLM at inference time

You understand this is all RAG is, right? RAG is any additional system to provide contextually relevant (and often more timely) supporting information to a baked model.

People sometimes make RAG out to be a specific combination of embeddings, chunking, vector DBs, etc. But that is ancillary. RAG is simply selecting the augmentation data and supplying it with the question -- see the sketch below.
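
To make that concrete, here's a rough sketch of "select the augmentation data and supply it with the question". The helper names and the toy keyword-overlap scoring are my own illustrative assumptions, not any particular library's API; the R step could just as well be embeddings, BM25, or anything else.

    # Minimal RAG sketch: retrieve a few relevant docs, prepend them to the question.
    # The scoring function is a deliberately naive stand-in for a real retriever.

    def score(doc: str, question: str) -> int:
        """Toy relevance score: count question words that appear in the doc."""
        words = set(question.lower().split())
        return sum(1 for w in doc.lower().split() if w in words)

    def retrieve(docs: list[str], question: str, k: int = 3) -> list[str]:
        """The R: pick the k most relevant docs instead of all of them."""
        return sorted(docs, key=lambda d: score(d, question), reverse=True)[:k]

    def build_prompt(docs: list[str], question: str) -> str:
        """The A: supply the selected context alongside the question."""
        context = "\n\n".join(retrieve(docs, question))
        return f"Use the following context to answer.\n\n{context}\n\nQuestion: {question}"

    # The "lazy RAG" / rhetorical SELECT * version is the same thing with
    # retrieve() removed: "\n\n".join(docs) + the question.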

Anyways, I think this thread has reached a conclusion and there really isn't much more value in it. Cheers.

I agree it isn't embeddings or Vector DBs.

I personally define it as not loading all of the data into the context window.

Very new field, and there aren't many reliable sources. It would be worth standardizing the meaning.