We’re RooAGI. We built Lint-AI, a Rust CLI for indexing and retrieving evidence from large AI-generated corpora.
As AI systems create more task notes, traces, and reports, storing documents isn’t the only challenge.
The real problem is finding the right evidence when the same idea appears in multiple places, often with different wording.
Lint-AI is our current retrieval layer for that problem.
What Lint-AI currently does:
* Indexes large documentation corpora.
* Extracts lightweight entities and important terms.
* Supports hybrid retrieval using lexical, entity, term, and graph-aware scoring.
* Returns chunk-level evidence via --llm-context for a downstream reviewer or LLM.
* Exports doc, chunk, and entity graphs.
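To make "hybrid retrieval" concrete, here is a minimal sketch of score fusion. The struct fields, weights, and the weighted-sum formula are illustrative assumptions for this post, not Lint-AI's actual internals; the point is only that several signals are combined into one ranking score.

```rust
// Illustrative sketch only: Lint-AI's real signals and weights are internal.
// Each retrieved chunk gets one score per signal, fused by a weighted sum.

struct ChunkScores {
    lexical: f64, // e.g. BM25-style text match against the query
    entity: f64,  // overlap with entities extracted from the query
    term: f64,    // matches on extracted important terms
    graph: f64,   // proximity in the doc/chunk/entity graph
}

// Hypothetical fusion: weights are a tuning knob, here just example values.
fn hybrid_score(s: &ChunkScores, w: &[f64; 4]) -> f64 {
    w[0] * s.lexical + w[1] * s.entity + w[2] * s.term + w[3] * s.graph
}

fn main() {
    let weights = [0.4, 0.25, 0.2, 0.15];
    let a = ChunkScores { lexical: 0.9, entity: 0.1, term: 0.3, graph: 0.0 };
    let b = ChunkScores { lexical: 0.5, entity: 0.8, term: 0.6, graph: 0.7 };
    // A chunk with a weaker lexical match can still outrank a purely
    // lexical hit when entity/term/graph signals agree with the query.
    println!("a = {:.3}", hybrid_score(&a, &weights)); // 0.445
    println!("b = {:.3}", hybrid_score(&b, &weights)); // 0.625
}
```

This is why the tool can surface chunks that describe the same concept in different words: the non-lexical signals carry weight even when the wording diverges.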
Example:
    ./lint-ai /path/to/docs --llm-context "where docs describe the same concept differently" --result-count 8 --simplified
That command does not decide whether documents contradict each other. It retrieves the most relevant chunks so that a reviewer layer can compare them.
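To illustrate that split, here is a sketch of what a trivial downstream reviewer step could look like. This is not part of Lint-AI; the `jaccard` helper and the sample chunks are hypothetical, and a real reviewer would use an LLM or embedding similarity rather than word overlap.

```rust
// Hypothetical reviewer-side step (outside Lint-AI): given retrieved
// chunks, measure lexical overlap so a reviewer can see which pairs
// say similar things in different words.

use std::collections::HashSet;

// Jaccard similarity over whitespace-separated tokens: |A ∩ B| / |A ∪ B|.
fn jaccard(a: &str, b: &str) -> f64 {
    let sa: HashSet<&str> = a.split_whitespace().collect();
    let sb: HashSet<&str> = b.split_whitespace().collect();
    let inter = sa.intersection(&sb).count() as f64;
    let union = sa.union(&sb).count() as f64;
    if union == 0.0 { 0.0 } else { inter / union }
}

fn main() {
    // Example chunks a retrieval pass might return for one concept.
    let chunks = [
        "sessions expire after 30 minutes of inactivity",
        "idle sessions are terminated after half an hour",
    ];
    // Low overlap + same topic is exactly the case a semantic reviewer
    // (human or LLM) needs to compare for agreement or contradiction.
    println!("jaccard = {:.2}", jaccard(chunks[0], chunks[1]));
}
```

The retrieval layer's job ends at producing these candidate pairs with evidence attached; judging consistency is left to the layer above it.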
Repo: https://github.com/RooAGI/Lint-AI

We’d appreciate feedback on:
* Retrieval/ranking design for documentation corpora.
* How to evaluate evidence-retrieval quality for alignment workflows.
* What kinds of entity/relationship modeling would actually be useful here?
Visit: https://rooagi.com/