LLM memory (in general, any implementation) is good in theory.
In practice, as it grows it gets just as messy as not having it.
In the example on your front page you say "continue working on my project", but you're rarely working on just one project. You might have 5 or 10 in memory, each of which made sense to add at the time.
So now you still have to say "continue working on the sass project". Sure, there's some context around the details, but you pay for it by filling up your LLM context and making extra MCP calls.
True! But this is a very naive implementation; a proper implementation could overcome these challenges.
Well, let's talk again when the problems have been solved, then. Until then, manually curated skills and documentation will beat this.
And once you're being specific about what it needs to remember, you're zero steps away from having just told the AI to write and read files as its "memory".
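To make that point concrete, here's a minimal sketch of the "just read and write files" approach; the directory name, file format, and example notes are all made up:

```python
import json
from pathlib import Path

MEMORY_DIR = Path("memory")  # hypothetical storage location

def remember(topic: str, note: str) -> None:
    """Append a note to a per-topic memory file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    path = MEMORY_DIR / f"{topic}.jsonl"
    with path.open("a") as f:
        f.write(json.dumps({"note": note}) + "\n")

def recall(topic: str) -> list[str]:
    """Read back every note stored for a topic."""
    path = MEMORY_DIR / f"{topic}.jsonl"
    if not path.exists():
        return []
    with path.open() as f:
        return [json.loads(line)["note"] for line in f]

# Hypothetical usage: one file per project, curated by hand or by the model.
remember("sass-project", "deploy target is staging, not prod")
print(recall("sass-project"))
```

That's the whole "memory" system: two file operations behind a topic key. Anything fancier mostly adds retrieval heuristics on top of this.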