Honestly, it's mostly just some random LSP adapter I forked and fixed a few bugs on. It's not even that comprehensive, but it goes a long way and covers what seems most essential. Then I have some notes in the long-term context about how to use a combination of the gh CLI and cargo doc to read documentation and dependency source code/examples.
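Roughly, those notes boil down to something like this (a paraphrased sketch, not the literal wording; the flags and paths are just the standard cargo/gh ones):

    - For API docs, run cargo doc --no-deps -p <crate> and read the
      generated files under target/doc/ instead of guessing at APIs.
    - Dependency sources are already on disk under ~/.cargo/registry/src/;
      grep there before reaching for the network.
    - For examples and READMEs, use gh repo clone <owner>/<repo> (or
      gh api) and look at the repo's examples/ directory.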
A few things beyond your question, for anyone curious:
I've also poked around with a custom MCP server that attempts to teach the LLM how to use ast-grep, but that didn't really work as hoped: smaller LLMs stumble over the YAML indentation. It helps sometimes, but my next attempt on that project will be to rely on GritQL instead, which is more like a template language for AST-aware code transformations.
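To make the contrast concrete: an ast-grep rule is a whole YAML document, something like this (the stock console.log example, not one of my actual rules):

    id: no-console-log
    language: JavaScript
    rule:
      pattern: console.log($MSG)
    fix: logger.debug($MSG)

while the roughly equivalent GritQL is a single pattern with no indentation to get wrong (again just a sketch; check the GritQL docs for exact syntax):

    language js
    `console.log($msg)` => `logger.debug($msg)`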
Lastly, there are probably a lot of little things in my long-term context that help get into a successful flow. I wouldn't be surprised if a key difference between getting good results and getting bad results with these agentic LLM tools is how people react to failures. If a failure makes you immediately throw up your hands and give up, you're not doing it right. If instead you press the little '#' (in Claude Code) and add some instructions to the long-term context memory, you'll get results. It's about persistence and really learning to understand these things as tools.
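For example, the kind of memory entry I mean is just a line or two like this (hypothetical wording; Claude Code stores these in a CLAUDE.md memory file):

    After editing Rust code, always run cargo check and fix any errors
    before reporting the change as done. Never claim a fix works without
    compiling it.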
Interesting, so an LSP MCP like this? https://github.com/isaacphi/mcp-language-server
Also an interesting note on the docs, though Claude does try to use cargo doc by itself sometimes.
I was actually wondering why GritQL did not have an MCP; this seems like a natural fit. I'd be interested to know if it works for you.
I'm always a bit hesitant to add things to the long-term context: it feels very finicky to keep it from being ignored, and the more there is, the more likely it seems to get ignored. Instead I usually just repeat myself.
Thank you for the answer; it seems there are still lots of things to try.