Actually lately I’ve been feeling the other way around with it. The LLM catches things I would have overlooked. I ask for a new feature in a certain file, and the LLM suggests fixing a tangentially related file to accommodate the new feature without breaking something else. Maybe this is just the crap legacy codebase I’m working with and how tangled up everything is, but I definitely have found several times now that it caught things I would have missed.

> The LLM catches things I would have overlooked. I ask for a new feature in a certain file, and the LLM suggests fixing a tangentially related file to accommodate the new feature without breaking something else.

What are you using? Do you think this behavior is in response to prompting? My goal at times is to "rabbit hole" the LLM: send it down rabbit holes chasing bigger and bigger picture issues until it homes in on something fundamentally broken that could have a big impact if fixed. But it's not trivial for me to push the agent in that direction.