I think explicit post-training is going to be needed to make this kind of approach effective.

As this repo notes, "The secret to good memory isn't remembering more. It's knowing what to forget." But knowing what is likely to be important in the future implies a working model of the future and your place in it. It's a fully AGI-complete problem: "Given my current state and goals, what am I going to find important, conditioned on the likelihood of any particular future..." Anyone working with these agents knows they are hopelessly bad at modeling their own capabilities, much less at projecting them forward.
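To put the objection in code: even a naive version of "decide what to forget" takes a future model as an input. Here's a minimal sketch (all names are hypothetical, not from the repo) where the unsolved part is supplying an accurate sample_future:

    def expected_importance(memory, sample_future, utility, n_samples=100):
        # Average usefulness of a memory across sampled futures.
        # An accurate sample_future() is the AGI-complete part: it has
        # to model "the future and your place in it."
        return sum(utility(memory, sample_future())
                   for _ in range(n_samples)) / n_samples

    def prune(memories, sample_future, utility, keep=100):
        # "Knowing what to forget": retain only the memories with the
        # highest expected importance under the future model.
        ranked = sorted(memories,
                        key=lambda m: expected_importance(m, sample_future, utility),
                        reverse=True)
        return ranked[:keep]

Everything except sample_future is trivial bookkeeping; the point is that current agents can't supply that one callable.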
