Good question. We don’t pass the entire graph into the model. The graph acts as an index over structured notes. The assistant retrieves only the relevant notes by following the graph. That keeps context size bounded and avoids dumping raw history into the model.
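
To make the retrieval part concrete, here's a minimal sketch of what "graph as index" means here (class and field names are illustrative, not our actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    note_id: str
    text: str

@dataclass
class GraphIndex:
    notes: dict[str, Note] = field(default_factory=dict)      # note_id -> structured note
    edges: dict[str, set[str]] = field(default_factory=dict)  # note_id -> related note_ids

    def retrieve(self, entry_points: list[str], max_hops: int = 2, limit: int = 20) -> list[Note]:
        """Follow edges outward from the entry points and collect only nearby
        notes, so the context handed to the model stays bounded."""
        seen = set(entry_points)
        frontier = list(entry_points)
        picked: list[Note] = []
        for _ in range(max_hops + 1):
            next_frontier = []
            for nid in frontier:
                if nid in self.notes and len(picked) < limit:
                    picked.append(self.notes[nid])
                for neighbor in self.edges.get(nid, ()):
                    if neighbor not in seen:
                        seen.add(neighbor)
                        next_frontier.append(neighbor)
            frontier = next_frontier
        return picked
```

Only the notes returned by `retrieve` go into the prompt, never the whole graph.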

For contradictory or stale information: since notes are derived from emails and conversations, we use the conversation's timestamp to determine which information is most recent when updating the corresponding note. The agent then operates on that current state.
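
In pseudocode terms it's essentially a last-write-wins update keyed on the source conversation's timestamp (a rough sketch, names and fields made up):

```python
from datetime import datetime, timezone

# note key -> (value, timestamp of the conversation it came from)
notes: dict[str, tuple[str, datetime]] = {}

def apply_update(key: str, value: str, conversation_ts: datetime) -> None:
    """Last-write-wins: only overwrite the note if the source conversation is newer."""
    current = notes.get(key)
    if current is None or conversation_ts > current[1]:
        notes[key] = (value, conversation_ts)
    # otherwise the incoming information is stale and is ignored

apply_update("meeting_time", "3pm Tuesday", datetime(2024, 5, 1, tzinfo=timezone.utc))
apply_update("meeting_time", "4pm Tuesday", datetime(2024, 5, 3, tzinfo=timezone.utc))
assert notes["meeting_time"][0] == "4pm Tuesday"
```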

That said, handling contradictions more explicitly is something we’re thinking about. For example, flagging conflicting updates for the user to manually review and resolve. Appreciate you raising it.
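
The rough shape of that (purely hypothetical, nothing built yet) would be to queue disagreements for the user instead of overwriting silently:

```python
from datetime import datetime

review_queue: list[dict] = []  # conflicts waiting for manual resolution

def update_or_flag(notes: dict[str, tuple[str, datetime]], key: str,
                   value: str, conversation_ts: datetime) -> None:
    """If an incoming update disagrees with the stored value, flag it for
    review instead of silently replacing it."""
    current = notes.get(key)
    if current is not None and current[0] != value:
        review_queue.append({"key": key, "stored": current,
                             "incoming": (value, conversation_ts)})
        return
    notes[key] = (value, conversation_ts)
```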

> That said, handling contradictions more explicitly is something we’re thinking about.

That's a great idea. The inconsistencies in a given graph are exactly where attention is needed, like an internal semantic diff. If you aim it at values, it becomes a hypocrisy or moral-complexity detector.

Interesting framing! We’ve mostly been thinking of inconsistencies as signals that something was missed by the system, but treating them as attention points makes sense and could actually help build trust.

This was something I was working on for a personal solution (flagging various contradictory threads). I suspect it is a common use case.

That’s interesting. Would be curious to know what types of contradictions you were looking at and how you approached flagging them.