I agree that the parsing side of linking code to documentation is a solved problem. I think @ramon156's suggestion to use tree-sitter or something similar to parse snippets into an abstract syntax tree makes sense.
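For concreteness, the extraction step could look something like this with the py-tree-sitter bindings. This is a minimal sketch, not a proposal: it assumes the 0.22+ Python bindings plus the `tree-sitter-python` grammar package, and the snippet is just an example.

```python
# Minimal sketch: pull identifier tokens out of a snippet with tree-sitter.
# Assumes `pip install tree-sitter tree-sitter-python` (0.22+ bindings);
# the snippet and the choice of grammar are illustrative only.
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

parser = Parser(Language(tspython.language()))
tree = parser.parse(b"import json\ndata = json.loads(raw)\n")

def identifiers(node):
    """Walk the syntax tree, yielding identifier text and position."""
    if node.type == "identifier":
        yield node.text.decode(), node.start_point
    for child in node.children:
        yield from identifiers(child)

for name, point in identifiers(tree.root_node):
    print(f"{name} at line {point.row}, column {point.column}")
```

The node type names differ from grammar to grammar, but the walk itself is the same idea for any language tree-sitter supports, which is why I'd call this half of the problem solved.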

To clarify my earlier point, I wasn't suggesting this is impossible, just that it's not *practical* to build a universal LSP that works with every language and framework out of the box, without anything local to index. I don't think reusing an existing LSP would be a great fit here either, since LSPs rely on having full project context, dependencies, and type information, none of which are available when analyzing code snippets on arbitrary webpages.

Parsing was never my main concern, though. It's the "map tokens to URLs" part. A universal mapping from every token to every piece of documentation on the internet is *impractical* to build and maintain. To achieve parity without LLMs, I'd need to write and maintain parsers for every documentation website, and that assumes documentation even exists for most tokens (it doesn't).
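To make the maintenance burden concrete, the non-LLM version ends up being a pile of hand-written, per-site resolvers. The URL patterns below are real conventions for docs.python.org and doc.rust-lang.org, but the resolver functions and their lookup tables are hypothetical and would need constant curation:

```python
# Hypothetical per-site resolvers: one function per documentation source,
# each with its own URL scheme and its own hand-maintained symbol table.
def resolve_python_stdlib(token: str) -> str | None:
    modules = {"json", "os", "re"}  # a tiny slice of what would need curating
    if token in modules:
        return f"https://docs.python.org/3/library/{token}.html"
    return None

def resolve_rust_std(token: str) -> str | None:
    # rustdoc URLs also encode the item kind (struct, trait, fn, ...),
    # which a bare token alone doesn't tell you.
    known = {
        "Vec": "std/vec/struct.Vec.html",
        "HashMap": "std/collections/struct.HashMap.html",
    }
    return f"https://doc.rust-lang.org/{known[token]}" if token in known else None

# ...and one more of these for every other ecosystem and documentation site.
RESOLVERS = [resolve_python_stdlib, resolve_rust_std]

def resolve(token: str) -> str | None:
    for resolver in RESOLVERS:
        if (url := resolver(token)) is not None:
            return url
    return None  # most tokens land here: there is no documentation for them
```

Every one of those lookup tables goes stale the moment a site reorganizes its docs, which is exactly the maintenance burden I'm worried about.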

I think @kristopolous's suggestion of grounding the LLM in a serialized database of documentation aggregated from many sources makes the most sense. That way, the LLM is just extracting and presenting key information from real documentation rather than generating it from scratch.
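Concretely, the flow I picture is: look the token up in a local documentation store, then hand only those excerpts to the model. This is a rough sketch under my own assumptions; the `docs(symbol, excerpt, url)` schema and the prompt wording are placeholders, not a specific proposal:

```python
# Sketch of grounding: retrieve stored documentation for a token and build a
# prompt that only asks the model to summarize what was retrieved.
# The `docs(symbol, excerpt, url)` schema is hypothetical.
import sqlite3

def lookup_docs(db: sqlite3.Connection, token: str, limit: int = 3) -> list[tuple[str, str]]:
    rows = db.execute(
        "SELECT excerpt, url FROM docs WHERE symbol = ? LIMIT ?", (token, limit)
    )
    return [(excerpt, url) for excerpt, url in rows]

def build_prompt(token: str, hits: list[tuple[str, str]]) -> str:
    context = "\n---\n".join(f"{excerpt}\n(source: {url})" for excerpt, url in hits)
    return (
        f"Using only the documentation excerpts below, explain what `{token}` is "
        f"and cite the source URL.\n\n{context}"
    )

# The model call itself is provider-specific and omitted; the point is that
# the model summarizes retrieved documentation instead of answering from memory.
```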

There are probably ways to make this easier. Maybe an offline job that uses LLMs to keep the mappings up to date. The project could also be scoped down to a single ecosystem like Rust, where documentation is centralized, though that falls apart once you try to scale beyond one language, as mentioned above. Maybe I could combine raw definitions from GitHub with an LSP to generate the information?
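For the offline-job idea, the rough shape I have in mind is a batch process where the LLM (or any other extractor) only runs offline to propose candidate symbol-to-URL pairs, and the output is a static artifact that the runtime consults with a plain lookup. Everything below is hypothetical: the input format, the sanity checks, and the `token_map.json` artifact name.

```python
# Hypothetical offline refresh job: merge newly proposed mappings (produced
# upstream by an LLM/extraction pass over crawled docs) into a static map
# that the runtime reads without ever calling a model.
import json
from urllib.parse import urlparse

def merge_mappings(existing: dict[str, str], proposed: list[dict]) -> dict[str, str]:
    merged = dict(existing)
    for item in proposed:
        symbol, url = item.get("symbol"), item.get("url")
        # Basic sanity checks; anything ambiguous is left for manual review.
        if symbol and url and urlparse(url).scheme == "https" and symbol not in merged:
            merged[symbol] = url
    return merged

if __name__ == "__main__":
    with open("token_map.json") as f:
        existing = json.load(f)
    proposed = [{"symbol": "Vec", "url": "https://doc.rust-lang.org/std/vec/struct.Vec.html"}]
    with open("token_map.json", "w") as f:
        json.dump(merge_mappings(existing, proposed), f, indent=2, sort_keys=True)
```

The page-side code then only ever does a dictionary lookup, and all of the LLM cost and staleness risk stays in the batch job.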

Open to other suggestions on how to bridge this gap.