I hacked together something similar to the concept they describe a few months ago (https://github.com/btucker/agentgit) and then ended up not actually finding it that useful and abandoning it.
I feel like the value would be in analyzing those rich traces with another agent to extract (failure) patterns and learnings, as part of a flywheel setup. As a human I would rarely if ever want to look at this -- I don't even have time to look at the final code itself!
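Roughly the flywheel step I have in mind, as a hand-wavy sketch (the AGENTS.md target and the failure-extraction heuristic are stand-ins; in practice the extraction would be another agent, not a keyword filter):

```python
from pathlib import Path

LEARNINGS = Path("AGENTS.md")  # wherever the next session picks up its context

def extract_failure_patterns(trace_text: str) -> list[str]:
    # Crude stand-in: in practice you'd hand the whole trace to another agent
    # and ask it for failure patterns / learnings. Here it just grabs lines
    # that look like failures so the sketch runs end to end.
    return [line.strip() for line in trace_text.splitlines()
            if "error" in line.lower() or "failed" in line.lower()]

def run_flywheel_step(trace_path: Path) -> None:
    lessons = extract_failure_patterns(trace_path.read_text())
    if lessons:
        with LEARNINGS.open("a") as f:
            f.write("\n".join(f"- {lesson}" for lesson in lessons) + "\n")
```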
> value would be in analyzing those rich traces with another agent to extract (failure) patterns and learnings
Claude Code supports hooks. That lets me run an agent skill at the end of every agent execution to automatically determine whether there were any lessons worth learning from the last session. If there were, new agent skills are automatically created or existing ones updated, as appropriate.
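For anyone curious, here's a minimal sketch of the kind of hook command I mean. The stdin payload shape (transcript_path field), the .claude/skills/ location, and the exact prompt are assumptions from my own setup rather than anything official, so check the hooks docs before copying:

```python
#!/usr/bin/env python3
# Sketch of a session-end hook command. Assumes the hook passes JSON on
# stdin containing a transcript_path field and that `claude -p` does a
# one-shot headless run; adjust both to whatever your setup actually provides.
import json
import subprocess
import sys

def main() -> None:
    payload = json.load(sys.stdin)
    transcript = payload.get("transcript_path")  # assumed field name
    if not transcript:
        return
    prompt = (
        f"Review the session transcript at {transcript}. If it contains a "
        "reusable lesson, create or update a skill under .claude/skills/; "
        "otherwise do nothing."
    )
    # One-shot headless run; swap in whichever agent/CLI you actually use.
    subprocess.run(["claude", "-p", prompt], check=False)

if __name__ == "__main__":
    main()
```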
Completely agree. But I wonder how much of that is just accomplished with well-placed code comments that explain the why, so future agent interactions don't misunderstand. I have something like this in my AGENTS.md.
Try running `/insights` with Claude Code.