At Zenning AI, we build a generalist AI designed to replace entire jobs with just prompts. Our agents typically run autonomously for hours, so effective context management is critical. I'd say we invest most of our engineering effort into what is ultimately context management, including:

1. Multi-agent orchestration
2. Summarising and chunking large tool and agent responses
3. Passing large context objects by reference between agents and tools (see the sketch below)
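
To illustrate point 3, here is a minimal sketch (not our production code) of what "passing by reference" can look like: tools write large payloads into a shared store and hand agents a small opaque handle, so the full text never bloats the agent's context. The `ContextStore` and `search_tool` names are hypothetical stand-ins.

```python
import uuid

class ContextStore:
    """In-memory stand-in for whatever blob/object store you actually use."""

    def __init__(self):
        self._objects: dict[str, str] = {}

    def put(self, payload: str) -> str:
        ref = f"ctx://{uuid.uuid4().hex}"   # opaque reference the agent can carry around
        self._objects[ref] = payload
        return ref

    def get(self, ref: str) -> str:
        return self._objects[ref]

store = ContextStore()

def search_tool(query: str) -> str:
    """Hypothetical tool: returns a reference, not the huge result itself."""
    big_result = f"...very large search output for {query!r}..."
    return store.put(big_result)

# The agent's context only ever contains the short ref string; a downstream
# tool dereferences it when (and if) the full payload is actually needed.
ref = search_tool("quarterly revenue by region")
print(ref)                  # e.g. ctx://3f9a...
print(store.get(ref)[:40])  # dereference on demand
```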

Two things to note that might be interesting to the community:

Firstly, when managing context, I recommend adding evals to your context-management flow, so you can measure effectiveness as you make improvements and changes.

For example, our evals measure the impact of using Anthropic's memory feature over time, which lets the team make better-informed decisions about which tools to use with our agents.
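
A hedged sketch of the kind of eval we mean: run the same task set through two context-management configurations (e.g. with and without a memory tool) and compare scores. `run_agent`, `score_answer`, and the task list are hypothetical placeholders for your own agent runner, grader, and dataset, not any specific vendor API.

```python
from statistics import mean

TASKS = [
    {"prompt": "Summarise yesterday's support tickets", "expected": "..."},
    {"prompt": "Draft a follow-up email to the Acme lead", "expected": "..."},
]

def run_agent(prompt: str, use_memory: bool) -> str:
    # Placeholder: call your agent with the chosen context-management config.
    return f"answer to {prompt!r} (memory={use_memory})"

def score_answer(answer: str, expected: str) -> float:
    # Placeholder: exact-match, rubric, or LLM-as-judge scoring.
    return 1.0 if expected in answer else 0.0

def eval_config(use_memory: bool) -> float:
    scores = [
        score_answer(run_agent(t["prompt"], use_memory), t["expected"])
        for t in TASKS
    ]
    return mean(scores)

print("with memory:   ", eval_config(use_memory=True))
print("without memory:", eval_config(use_memory=False))
```

Running the same harness before and after each change is what makes "did this compaction tweak actually help?" an answerable question.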

Secondly, there's a tradeoff not mentioned in this article: speed vs. accuracy. Faster summarisation (or 'compaction') comes at the cost of accuracy; good compaction can be slow. Depending on the use case, you should adjust your compaction strategy accordingly. For example (forgive the broad generalisation), consumer-facing products usually favour speed over a small bump in accuracy, whereas business use cases generally favour accuracy over speed.
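
One way to handle that tradeoff is to make it an explicit knob. The sketch below assumes a hypothetical `llm_summarise` call standing in for whatever model you use; the "fast" path is a cheap heuristic that trades fidelity for latency, the "accurate" path spends latency on a real summary of the older turns.

```python
def llm_summarise(text: str) -> str:
    # Placeholder for a slower, higher-quality model-based summary.
    return f"[model summary of {len(text)} chars]"

def compact(history: list[str], strategy: str = "fast", keep_last: int = 5) -> list[str]:
    if strategy == "fast":
        # Consumer-facing default: drop the middle, keep head and tail verbatim.
        if len(history) <= keep_last + 1:
            return history
        elided = len(history) - keep_last - 1
        return [history[0], f"[... {elided} messages elided ...]"] + history[-keep_last:]
    elif strategy == "accurate":
        # Business default: summarise the older turns properly before dropping them.
        older, recent = history[:-keep_last], history[-keep_last:]
        return [llm_summarise("\n".join(older))] + recent if older else history
    raise ValueError(f"unknown strategy: {strategy}")

history = [f"turn {i}" for i in range(20)]
print(compact(history, "fast"))
print(compact(history, "accurate"))
```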
