Interesting. I’ve been playing with something similar at the coding-agent harness message-sequence level (memory, I guess). I’m looking at human-driven UX for compaction and for resolving/pruning dead ends.

Human-driven compaction is interesting — you sidestep the "what's worth keeping" problem by putting a person in the loop. The tradeoff I've hit is that agents running autonomously need it to happen automatically or coherence degrades fast between sessions.

For pruning we landed on a last-touched timestamp + recall frequency counter per memory. Things not accessed in N sessions that were weakly formed to begin with get soft-deleted. Human review before hard delete is probably better UX if your setup allows it.
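To make that concrete, here's a rough sketch of the heuristic, using session indices instead of raw timestamps; the names and thresholds are made up, not our actual values:

```python
from dataclasses import dataclass

STALE_SESSIONS = 5   # N: sessions without access before a memory counts as stale
WEAK_RECALLS = 2     # "weakly formed": recalled fewer than this many times total

@dataclass
class Memory:
    text: str
    last_touched_session: int = 0
    recall_count: int = 0
    soft_deleted: bool = False

    def recall(self, session: int) -> str:
        # every access bumps the counter and the last-touched marker
        self.last_touched_session = session
        self.recall_count += 1
        return self.text

def prune(memories: list[Memory], current_session: int) -> list[Memory]:
    """Soft-delete stale, weakly formed memories; return them for human review."""
    flagged = []
    for m in memories:
        stale = current_session - m.last_touched_session >= STALE_SESSIONS
        weak = m.recall_count < WEAK_RECALLS
        if stale and weak and not m.soft_deleted:
            m.soft_deleted = True  # recoverable until a human confirms hard delete
            flagged.append(m)
    return flagged
```

The soft-delete flag is what makes the human-review step cheap: nothing is destroyed until someone signs off on the flagged list.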

Curious what "dead ends" look like in yours: conversational chains that didn't resolve, or factual ones?

> The tradeoff I've hit is that agents running autonomously need it to happen automatically or coherence degrades fast between sessions.

Yeah, that makes total sense. I wonder (and I'm sure the labs are doing this) whether the HITL output would be good for fine-tuning the models that do it autonomously?

I’m sticking with humans for the moment because I’m not sure where the boundaries lie: what actually makes it better and what makes it worse. It’s non-obvious so far.

Pruning “loops” has been pretty effective though: cases where a model gets stuck over N turns checking the same thing over and over and doesn't break out of it until way later. That's been good because it gives strong context-size benefits, and I think it's also the most automatable part.
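A minimal sketch of how I think about the loop case: normalize each turn, then collapse any run of N+ identical consecutive turns down to one. Everything here (the normalization, the threshold) is illustrative, not my actual harness code:

```python
from hashlib import sha256
from itertools import groupby

LOOP_THRESHOLD = 3  # N: identical turns in a row that count as a stuck loop

def turn_key(turn: str) -> str:
    """Normalize a turn so trivially different repeats still match."""
    return sha256(" ".join(turn.lower().split()).encode()).hexdigest()

def prune_loops(turns: list[str]) -> list[str]:
    """Collapse each run of >= LOOP_THRESHOLD identical turns to a single turn."""
    pruned: list[str] = []
    for _, group in groupby(turns, key=turn_key):
        run = list(group)
        if len(run) >= LOOP_THRESHOLD:
            pruned.append(run[0])  # keep one copy as a record of the stuck check
        else:
            pruned.extend(run)
    return pruned
```

Keeping one copy of the loop (rather than dropping it entirely) preserves the fact that the check happened, which seems to matter for coherence downstream.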

Pruning factually incorrect turns is something I'm trying, and pruning "correct" but "not correct for my style" turns as well. Building a dataset of it all is fun :)
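For the dataset, something like a labeled record per pruning decision seems like the natural shape. This schema and label taxonomy are just a guess at one way to structure it:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical label set covering the cases discussed above.
LABELS = {"keep", "loop", "factually_incorrect", "style_mismatch"}

@dataclass
class PruneExample:
    turn_text: str
    label: str          # one of LABELS
    reviewer_note: str = ""

def to_jsonl(examples: list[PruneExample]) -> str:
    """Serialize labeled examples, one JSON object per line."""
    return "\n".join(json.dumps(asdict(e)) for e in examples)
```

JSONL keeps it trivially appendable as reviews happen, and trivially loadable later if the HITL output does get used for fine-tuning.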