I don't think agents actually benefit from comments that describe what the code does. In my experience they don't improve response quality in the best case, and they drastically reduce it in the worst; they are noise that doesn't help the AI understand the context any better. This was already true for trained developers, and it is even more true for AI agents. Natural language is in almost every way a less efficient carrier of context, and AI has no problem inferring intent from good code. The real challenge is getting the AI to _produce_ good code, which takes a strict harness and rules. Another useful addition is semantic indexing of the codebase, so the AI can find code via semantic search (which some agents already do quite successfully).
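To make the semantic-indexing point concrete, here is a minimal sketch of searching a code index by meaning rather than exact text. It is a toy: the bag-of-words "embedding" stands in for a real code-aware embedding model, and the index entries (`auth.py`, `billing.py`) are invented examples, not any particular agent's implementation.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: token counts from a crude tokenizer. A real semantic
    # index would use a learned embedding model here (assumption).
    tokens = re.findall(r"[A-Za-z_]+", text.lower())
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(index, query, top_k=1):
    # Rank indexed code chunks by similarity to the natural-language query.
    q = embed(query)
    ranked = sorted(index, key=lambda e: cosine(embed(e["code"]), q), reverse=True)
    return ranked[:top_k]

index = [
    {"path": "auth.py", "code": "def verify_password(user, password): ..."},
    {"path": "billing.py", "code": "def charge_invoice(invoice, amount): ..."},
]
best = search(index, "how do we check a user's password?")[0]
print(best["path"])  # → auth.py
```

The payoff is that the query shares no exact function name with the code; overlap on the domain vocabulary ("user", "password") is enough to route the agent to the right file.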
The only context I have consistently found useful is about project-specific tool calling. Trying to provide natural-language context about the project itself always proved ambiguous, inaccurate, and out of date. Agents are very good at reading code, and code is the best way to express context unambiguously.
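As an illustration of what "project-specific tool calling" context can look like, here is a hedged sketch of exposing one project command to an agent as a tool definition. The schema shape follows the common function-calling convention; the `run_tests` name and the `pytest` command are illustrative assumptions, not any specific project's setup.

```python
import subprocess

# Tool definition handed to the agent: a machine-readable, unambiguous
# description of one project-specific capability (hypothetical example).
RUN_TESTS_TOOL = {
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the project's test suite and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Optional file or directory to restrict the run to.",
                },
            },
            "required": [],
        },
    },
}

def run_tests(path=None):
    # Executes the command the schema describes (assumes pytest is the
    # project's test runner).
    cmd = ["pytest", "-q"] + ([path] if path else [])
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr
```

Unlike prose documentation, a tool definition like this cannot drift into ambiguity: either the command runs or it doesn't, and the agent learns the outcome directly from the output.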
You can have perfectly good code that is perfectly easy to understand and nevertheless _does not do what you intended it to do_. That is why tests exist, after all.
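A small invented example of that gap: a clean, readable `median` whose earlier, equally readable draft was wrong, and where only the test encodes the actual intent.

```python
def median(values):
    """Median of a non-empty list; averages the middle pair for even length."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# An equally readable draft, `return sorted(values)[len(values) // 2]`,
# handles odd-length input and silently mishandles even-length input.
# Nothing in the code itself reveals that; the second assertion does.
assert median([1, 3, 2]) == 2
assert median([1, 2, 3, 4]) == 2.5
```

The code reads fine either way; the test is what pins down what "median" was supposed to mean here.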