Interesting observation. After a human finishes writing code, they still remember why they made the choices they did. With an LLM, the context window is severely limited compared to a brain, so this information is usually thrown away when the feature is done; you cannot go back and ask the LLM why something is the way it is.

Yup; in the moment, you can just have the LLM dump its reasoning into the comments (we use idiomatic `<remarks></remarks>` doc tags for C# and JSDoc `@remarks`).

Future agents see the past reasoning as they `grep` through the code. This is especially good for non-obvious context, like business and domain-level decisions that were in the prompt but may not show up in the code itself.
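As a sketch of what this looks like in practice, here's a hypothetical TypeScript function where the `@remarks` block captures a business constraint from the prompt that a future agent (or human) couldn't infer from the code alone. The function name, rates, and the compliance rationale are all made up for illustration:

```typescript
/**
 * Calculates the late fee for an overdue invoice.
 *
 * @remarks
 * Business context from the original prompt (hypothetical): fees accrue
 * at 1% of the invoice total per day, but are capped at 25% of the total
 * because finance flagged uncapped fees as a compliance risk in some
 * jurisdictions. The cap is intentional, not a bug.
 */
function lateFee(invoiceTotal: number, daysOverdue: number): number {
  const uncapped = invoiceTotal * 0.01 * daysOverdue; // 1% per day
  return Math.min(uncapped, invoiceTotal * 0.25);     // compliance cap
}
```

Without the `@remarks` note, a later agent might "fix" the cap as an apparent off-by-design bug; with it, the decision survives the original context window.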

I can't prove this, but I'm also guessing that this improves the LLM's output: since it writes the comment first and then the code, it is effectively writing a mini-spec right before it emits the tokens for the function (this would make an interesting research paper).