If there’s any amount of irony in your comment, I’m missing it - and I apologize for that.

That said, people built this without LLMs years, even decades, ago. But UX has fallen by the wayside for quite some time at the companies that used to build IDEs. Then newer devs come along and start a project without the benefit of experience in a codebase that already has a given feature … and eventually someone writes a VSCode plugin that provides LLM-generated documentation tooltips because “there is just no other way it can be done.”

We have language servers for most programming languages. Those language servers already provide the tokens one needs to reference the documentation, and they respond far faster than waiting for an LLM round trip.
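To be concrete about what a language server already gives you: the ordinary `textDocument/hover` request returns the documentation for the symbol at a position, no model involved. A minimal sketch of that request in TypeScript (the file URI and cursor position here are invented for illustration):

```ts
// Minimal sketch of the LSP request an editor makes to get documentation for
// the token under the cursor. The URI and position below are made up.
interface Position { line: number; character: number }

function buildHoverRequest(id: number, uri: string, position: Position): string {
  const body = JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "textDocument/hover",
    params: {
      textDocument: { uri },
      position, // zero-based line/character of the token to look up
    },
  });
  // LSP messages are framed with a Content-Length header and a blank line,
  // then written to the language server's stdin.
  return `Content-Length: ${new TextEncoder().encode(body).length}\r\n\r\n${body}`;
}

// Example: ask the server about the token at line 10, column 4.
console.log(buildHoverRequest(1, "file:///tmp/example.rs", { line: 10, character: 4 }));
```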

TBH, if anyone’s excuse is “an LLM is the only way to implement feature Q,” then they’re definitely in need of some experience in software creation.

I don't think you're wrong, but question: it's the weekend, you have an idea for something like this that you want to crank out. Is it really better for you to never ship because it takes a long time to build, or is it better to be able to ship using something like an LLM?

In my opinion the shipped product is better than the unshipped product. While of course I would prefer the version that you have designed, I sure don't have time to build it, and I'm guessing you don't either.

If this were our day job and we were being paid for it, it would be a much different story, but this is a hobby project made open source for the world.

I’m going to get the LLM to assist me in building it. The shipped product is not going to rely on the LLM. That’s how I get it done over a weekend.

Please post here once you have something working, or share what you find if you struggle. I would be interested to see if you could get something working and would love to be wrong, as it would be nice to have something similar that does not use an LLM. It would be really cool if something more useful comes out of my experiment. I don't think you could reuse much from my codebase, but feel free to take anything from it that you want.

Some things that might be useful to know to speed you up:

1. Most code blocks on the internet are easy to find in a webpage: they are generally wrapped in `<code>` tags. You can query for these using the method in my extension. Then filter out any code block with a `<span>` count <= 1, since `<code>` is also used to highlight arbitrary inline snippets and you really only want actual blocks of code (see the first sketch after this list).

2. Even with your implementation, you will need a way to decide when to generate documentation for a code block, as some documentation websites are one really long page. You can do this with the two types of observers I use in my codebase: one identifies when a code block is in view, the other keeps track of mutations to code blocks. You need to track mutations because sites like ChatGPT continuously edit a code block while streaming a response; you want to generate documentation once it's done, as that's when the code is well formed (see the second sketch after this list). I have a janky example of how to do this in my extension. Claude should be able to find the code.

3. LLMs were useful for building this, but they struggled with design decisions, especially around UX. This project seems out of distribution for them. Claude probably won't suggest the right solution a lot of the time, but if you have it list out multiple options, it can usually identify which one is best.
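On point 1, the query-and-filter step could look something like this in a content script. This is a sketch rather than the exact code from my extension, and the selector is a guess:

```ts
// Rough sketch of point 1: collect likely "real" code blocks on the page.
// The span-count heuristic filters out <code> elements that are just inline
// highlights rather than syntax-highlighted blocks.
function findCodeBlocks(root: Document = document): HTMLElement[] {
  const candidates = Array.from(root.querySelectorAll<HTMLElement>("pre code, code"));
  return candidates.filter((el) => el.querySelectorAll("span").length > 1);
}
```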
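For point 2, the natural web APIs for those two observers are IntersectionObserver (visibility) and MutationObserver (streaming edits). A sketch of the wiring; `generateDocs` is a placeholder and the debounce delay is arbitrary:

```ts
// Sketch of point 2: watch each code block for visibility and for streaming edits.
// generateDocs() stands in for whatever produces the tooltip content.
declare function generateDocs(block: HTMLElement): void;

const DEBOUNCE_MS = 500; // arbitrary "the block stopped changing" threshold

function watchCodeBlock(block: HTMLElement): void {
  // Fire once the block scrolls into view.
  const visibility = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) generateDocs(block);
    }
  });
  visibility.observe(block);

  // Sites like ChatGPT keep mutating a block while streaming; wait until the
  // mutations stop before regenerating, otherwise the code is half-formed.
  let timer: number | undefined;
  const mutations = new MutationObserver(() => {
    window.clearTimeout(timer);
    timer = window.setTimeout(() => generateDocs(block), DEBOUNCE_MS);
  });
  mutations.observe(block, { childList: true, characterData: true, subtree: true });
}
```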

I will probably check back here in a couple weeks if I don't hear anything from you. I would be really impressed if you can get something working in a weekend that is *not just hard coded for a specific use case, but scales well.*

Oh, one more thing: you’ll probably want to fetch documentation only after a code block has been in view for n milliseconds (like what I do in my extension). Otherwise you risk getting throttled when a user scrolls quickly through a webpage. It should only fetch documentation once the user pauses on a block.
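Concretely, that dwell check can just be a timer started when the block enters the viewport and cancelled if it leaves before the delay elapses. A sketch, with `fetchDocs` as a placeholder and the delay picked arbitrarily:

```ts
// Only fetch documentation after a block has stayed in view for DWELL_MS.
// fetchDocs() is a placeholder for the actual lookup.
declare function fetchDocs(block: HTMLElement): void;

const DWELL_MS = 750; // whatever pause length feels right

function observeWithDwell(block: HTMLElement): void {
  let timer: number | undefined;
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        timer = window.setTimeout(() => fetchDocs(block), DWELL_MS);
      } else {
        window.clearTimeout(timer); // user scrolled past before the pause elapsed
      }
    }
  });
  observer.observe(block);
}
```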

I agree that parsing codebases and linking code to documentation is a solved problem. I think @ramon156's suggestion to use tree-sitter or something similar to parse code into an abstract syntax tree makes sense.
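For what it's worth, the tree-sitter route in a browser extension would presumably go through web-tree-sitter plus a compiled grammar. A rough sketch of pulling out the identifiers you'd want to look up, assuming a JavaScript grammar wasm is available (the exact API varies a bit between versions):

```ts
// Sketch: parse a snippet with web-tree-sitter and collect identifier tokens.
// The wasm file names are assumptions; grammars must be compiled and shipped.
import Parser from "web-tree-sitter";

async function extractIdentifiers(source: string): Promise<string[]> {
  await Parser.init();
  const JavaScript = await Parser.Language.load("tree-sitter-javascript.wasm");
  const parser = new Parser();
  parser.setLanguage(JavaScript);

  const tree = parser.parse(source);
  // These identifier nodes are the tokens you'd then try to map to docs.
  return tree.rootNode.descendantsOfType("identifier").map((node) => node.text);
}
```

That part really is the easy half; the hard half is what to do with the tokens once you have them.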

To clarify my earlier point, I wasn't suggesting this is impossible, just that it's not *practical* to build a universal LSP that works with every language and framework out of the box without anything local to index. I don't think reusing an LSP would be a great fit here either, since LSPs rely on having full project context, dependencies, and type information. These aren't available when analyzing code snippets on arbitrary webpages.

Parsing was never my major concern though. It's the "map tokens to URLs" part. A universal mapping for every token to every piece of documentation on the internet is *impractical* and difficult to maintain. To achieve parity without LLMs, I'd need to write and maintain parsers for every documentation website, and that assumes documentation even exists for most tokens (which it doesn't).
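To make that concrete, the non-LLM version ends up being a hand-curated table along these lines, multiplied across every language, framework, documentation site, and version. The entries below are just illustrative:

```ts
// Illustration of why a static token -> documentation URL map is painful to
// maintain: every site has its own URL scheme, and coverage is never complete.
const DOC_URLS: Record<string, string> = {
  "Vec": "https://doc.rust-lang.org/std/vec/struct.Vec.html",
  "HashMap": "https://doc.rust-lang.org/std/collections/struct.HashMap.html",
  "Array.prototype.map": "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map",
  // ...thousands more entries, per language, per framework, per version
};

function lookupDocUrl(token: string): string | undefined {
  return DOC_URLS[token];
}
```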

I think kristopolous's suggestion of grounding the LLM with data sources that keep a serialized database of documentation from many different places makes the most sense. That way, the LLM is just extracting and presenting key information from real documentation rather than generating from scratch.
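In that setup the model's job shrinks to extraction: retrieve a real snippet from the local documentation store, then ask it to summarize only that. A sketch of the shape, where `lookupDocSnippet` is a hypothetical stand-in for whatever store is used:

```ts
// Sketch of the grounding idea: retrieve a real documentation snippet first,
// then ask the model to extract from *only* that snippet rather than generate
// from scratch. lookupDocSnippet() is a hypothetical local-store lookup.
declare function lookupDocSnippet(token: string): string | undefined;

function buildGroundedPrompt(token: string, codeContext: string): string | undefined {
  const snippet = lookupDocSnippet(token);
  if (!snippet) return undefined; // no real docs found: better to show nothing

  return [
    `You are summarizing existing documentation for the token "${token}".`,
    `Use ONLY the documentation below; if it does not answer, say so.`,
    ``,
    `Documentation:\n${snippet}`,
    ``,
    `Code being read:\n${codeContext}`,
  ].join("\n");
}
```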

There are probably ways to make this easier. Maybe an offline job that uses LLMs to keep the mappings up to date. The project could also be scoped down to a single ecosystem like Rust where documentation is centralized, though that falls apart once you try to scale beyond one language, as mentioned above. Maybe I could use the raw definitions on GitHub, combined with an LSP, to generate the information?

Open to other suggestions on how to bridge this gap.