Given the propensity of LLMs to hallucinate references, I'm not sure that really solves anything

I've worked on systems where the answers come back with clickable links to the source documents that were added to the RAG store.

It's perfectly possible to use LLMs to provide accurate context. Asking a SaaS product to do it purely from the data it was trained on just isn't the way to get there.

With RAG, the system injects the source material into the prompt, knows its hash, and can link you straight to the source document.
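
A minimal sketch of that idea (not any particular product's pipeline): each chunk in the store keeps a content hash and a clickable source URL, and both get injected alongside the text so the answer can cite the exact document it was grounded on. The document list, URLs, and the toy keyword retriever below are made-up placeholders.

```python
import hashlib

# Hypothetical internal documents; URLs are placeholders.
documents = [
    {"url": "https://wiki.internal/payments-runbook", "text": "Refunds are processed nightly by the billing batch job."},
    {"url": "https://wiki.internal/oncall-guide",      "text": "Escalate paging incidents to the on-call lead after 15 minutes."},
]

def build_store(docs):
    # Keep a short content hash and the source link alongside each chunk.
    store = []
    for doc in docs:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()[:12]
        store.append({"hash": digest, "url": doc["url"], "text": doc["text"]})
    return store

def retrieve(store, query, k=2):
    # Toy keyword-overlap scoring; a real system would use embeddings.
    q = set(query.lower().split())
    return sorted(store, key=lambda c: -len(q & set(c["text"].lower().split())))[:k]

def build_prompt(query, chunks):
    # Each injected chunk carries its hash and link, so the model can cite
    # [hash] and the UI can render that as a hyperlink back to the source.
    context = "\n".join(f"[{c['hash']}]({c['url']}): {c['text']}" for c in chunks)
    return (
        "Answer using only the sources below and cite them by [hash].\n\n"
        f"{context}\n\nQuestion: {query}"
    )

store = build_store(documents)
print(build_prompt("How are refunds processed?", retrieve(store, "How are refunds processed?")))
```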

I haven't seen hallucinated references happen at all with RAG systems. I built one at work to search internal stuff, and it's pretty easy to make it spit out accurate references with hyperlinks.
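
A small, self-contained sketch of the same idea from the answering side: because every retrieved chunk has a known hash, any reference the model emits can be checked against the store, and a made-up one simply fails the lookup. The hashes and URLs below are placeholders.

```python
import re

# Hypothetical mapping from chunk hash to source URL, as stored at retrieval time.
store = {
    "3f2a9c1d4e5b": "https://wiki.internal/payments-runbook",
    "a1b2c3d4e5f6": "https://wiki.internal/oncall-guide",
}

def verify_citations(answer, store):
    # Extract every [hash] citation and resolve it against the known sources.
    cited = re.findall(r"\[([0-9a-f]{12})\]", answer)
    return {h: store.get(h, "UNKNOWN: not a retrieved source") for h in cited}

# One real citation, one fabricated one that fails the lookup.
answer = "Refunds run nightly [3f2a9c1d4e5b]. See also [deadbeef1234]."
print(verify_citations(answer, store))
```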