It will increasingly become common knowledge that the best practice for AI coding is small edits, carefully planned by a human. Otherwise the LLM keeps going down rabbit holes and failing to produce useful results without supervision.

Huge rule systems, all-encompassing automations, etc. all assume that more context is better, which simply isn't the case given that "context rot" is a thing.

The problem this system is trying to solve: breaking large projects down into small tasks and assigning a sub-agent to each one, so the code, research, test logs, etc. stay inside that agent's context, with only a summary surfaced back to the main thread.
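
Roughly, the shape is something like the sketch below. This is my own illustration of the idea, not the actual system; names like run_sub_agent and SubAgentResult are hypothetical, and the sub-agent is stubbed out where a real LLM session would go.

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    instructions: str


@dataclass
class SubAgentResult:
    # Everything the sub-agent produced stays in here...
    full_transcript: list[str] = field(default_factory=list)
    # ...and only this short summary is passed back up.
    summary: str = ""


def run_sub_agent(task: Task) -> SubAgentResult:
    """Run one task in an isolated context (stubbed here).

    In a real system this would spawn an LLM session whose diffs,
    research notes, and test logs never leave this function.
    """
    result = SubAgentResult()
    result.full_transcript.append(f"working on: {task.instructions}")
    result.full_transcript.append("...diffs, test output, dead ends...")
    result.summary = f"{task.name}: done; 1 file changed, tests pass"
    return result


def run_project(tasks: list[Task]) -> list[str]:
    """Main thread: fan tasks out to sub-agents, keep only the summaries."""
    summaries = []
    for task in tasks:
        result = run_sub_agent(task)
        summaries.append(result.summary)  # the full transcript is dropped here
    return summaries


if __name__ == "__main__":
    plan = [
        Task("parser", "extract the config parser into its own module"),
        Task("tests", "add regression tests for the parser edge cases"),
    ]
    for line in run_project(plan):
        print(line)
```

The point of the structure is that the main thread's context only ever grows by one summary line per task, no matter how much churn each sub-agent went through internally.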