One of the things I'm using LLMs for a lot right now is quickly generating answers about larger codebases I'm completely unfamiliar with.

Anything up to 250,000 tokens I pipe into GPT-5 (previously o3), and beyond that I'll send it to Gemini 2.5 Pro.
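
Roughly, that routing decision looks like this (a minimal sketch, assuming the usual ~4 characters per token heuristic; the model name strings are just placeholders for whatever CLI or API you actually call):

```python
# Rough model routing by estimated prompt size (sketch, not a real API call).
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English prose and code.
    return len(text) // 4

def pick_model(text: str) -> str:
    if estimate_tokens(text) <= 250_000:
        return "gpt-5"          # fits in a single prompt
    return "gemini-2.5-pro"     # fall back to the larger context window
```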

For even larger code than that I'll fire up Codex CLI or Claude Code and let them grep their way to an answer.

This stuff has gotten good enough now that I no longer get stuck when new tools lack decent documentation - I'll pipe in just the source code (filtered for .go or .rs or .c files or whatever) and generate comprehensive documentation for myself from scratch.
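
Something like this is all the "tooling" that step needs (a minimal sketch; the prompt wording and the extension list are just illustrative):

```python
from pathlib import Path

# Collect source files by extension and assemble one big prompt asking the
# model to write documentation from scratch.
def build_docs_prompt(repo: str, extensions: tuple[str, ...] = (".go", ".rs", ".c")) -> str:
    parts = ["Write comprehensive documentation for this codebase.\n"]
    for path in sorted(Path(repo).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            parts.append(f"\n--- {path} ---\n{path.read_text(errors='ignore')}")
    return "".join(parts)
```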

Don't you see how this opens up a blind spot in your view of the code?

You don't have the luxury of someone deeply familiar with the code sanity-checking your perceived understanding of it, i.e. you don't see where the LLM is horribly off-track, because you lack sufficient understanding of that code to spot the error. In enterprise contexts this is very common, though, so it's quite likely that a lot of the haters here have seen PRs submitted by vibe-coders to their own projects that were inadequate enough that they started to blame the tool. For example, I have seen someone reinvent a client library's session handling because they were unaware that the existing session came batteries-included, and the LLM didn't hesitate to write the code again for them. The code worked and everything checked out, but because the developer didn't know what they didn't know, they submitted a janky mess.

The LLMs go off track all the time. I spot that when I try putting what I've learned from them into action.
