The biggest issue I've had with things like this is that AI doesn't understand context very well. Anything beyond the context window creates hallucinations: it starts making up things that may exist in one location and applies them to a completely unrelated scenario. I'd be curious whether this actually understands the connected pieces and catches changes that break those connections, otherwise it's just another linter?
Definitely! Giving the AI the ability to fetch the context it needs was a big challenge (since larger codebases can't all fit in an LLM's context window). It's not perfect yet, but the tools it has do give it a remarkable amount of insight into the overall codebase.
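To give a rough idea of the pattern (this is just a minimal sketch of the general "let the model fetch its own context" approach, not our actual implementation; the tool names, the `call_model` stub, and the message format are all made up for illustration):

```python
# Hypothetical sketch: an LLM loop that can request repo context via tools
# instead of trying to cram the whole codebase into one prompt.
import pathlib
import subprocess

REPO_ROOT = pathlib.Path(".")

def read_file(path: str, max_bytes: int = 20_000) -> str:
    """Return a file's contents so the model can pull in code it hasn't seen yet."""
    return (REPO_ROOT / path).read_text(errors="replace")[:max_bytes]

def grep_repo(pattern: str) -> str:
    """Search the repo for a symbol, letting the model trace cross-file connections."""
    result = subprocess.run(
        ["grep", "-rn", pattern, str(REPO_ROOT)],
        capture_output=True, text=True,
    )
    return result.stdout[:20_000]

TOOLS = {"read_file": read_file, "grep_repo": grep_repo}

def call_model(messages: list[dict]) -> dict:
    """Placeholder for whatever LLM API you use; expected to return either
    {"tool": name, "args": {...}} or {"content": final_answer}."""
    raise NotImplementedError("wire this up to your LLM of choice")

def review(question: str, max_steps: int = 10) -> str:
    """Drive the loop: the model asks for context until it can answer."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply.get("tool"):  # model asked for more context
            output = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": output})
        else:  # model decided it has enough context to answer
            return reply["content"]
    return "gave up: too many tool calls"
```

The point is that the model decides what to look at next (grep for a symbol, open the file that defines it, and so on), which is what lets it follow connections across files instead of hallucinating them from whatever happened to fit in the prompt.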